
Uses of the novelty metrics proposed by Shah et al.: what emerges from the literature?

Published online by Cambridge University Press:  05 May 2023

Lorenzo Fiorineschi*
Affiliation:
Department of Industrial Engineering, University of Florence, Firenze, Italy
Federico Rotini
Affiliation:
Department of Industrial Engineering, University of Florence, Firenze, Italy
*
Corresponding author L. Fiorineschi [email protected]

Abstract

Several concepts and types of procedures for assessing novelty and related concepts exist in the literature. Among them, the two approaches originally proposed by Shah and colleagues are often considered by scholars. These metrics rely on well-defined novelty types and a specific concept of novelty; however, more than 20 years after the first publication, it is still not clear whether and to what extent these metrics are actually used, why they are used and how. Through a comprehensive review of the papers citing the main work of Shah, Vargas-Hernandez & Smith (2003a, 2003b) (the study where the metrics are comprehensively described and applied), the present work aims to bridge this gap. The results highlight that only a few of the citing papers actually use the assessment approach proposed by Shah et al. and that a nonnegligible number use a modified or adapted version of the original metrics. Furthermore, several critical issues in the application of the metrics have been uncovered, which are expected to provide relevant information for scholars involved in reliable and repeatable novelty assessments.

Type
Review Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

The term ‘creativity’ is often misused, especially by industrial practitioners when promoting their own products and/or design processes. However, the problem is not limited to industry, since in academia, the term is sometimes used as a buzzword. Although creativity is often defined in different ways, making a shared definition unlikely (Doheim & Yusof Reference Doheim and Yusof2020), a plethora of studies provide comprehensive definitions, allowing us to clarify at least the fundamental pillars of creativity. More precisely, in the context of design, creativity has been defined by Amabile (Reference Amabile and Kidd1983) as the ability to create ideas that are novel, useful and appropriate, while Moreno et al. (Reference Moreno, Blessing, Yang, Hernández and Wood2016) define creativity as the ability to generate novel and valuable ideas. In other definitions, novelty is always present and can thus be considered the main pillar of any concept of creativity.

Unfortunately, the term ‘novelty’ has a multifaceted meaning, at least in the context of engineering design (Fiorineschi & Rotini Reference Fiorineschi and Rotini2021). For instance, Verhoeven, Bakker & Veugelers (Reference Verhoeven, Bakker and Veugelers2016) mentioned ‘technological novelty’, Boden (Reference Boden2004) considered both ‘historical novelty’ and ‘psychological novelty’ and Vargas-Hernandez, Okudan & Schmidt (Reference Vargas-Hernandez, Okudan and Schmidt2012) referred to the concept of ‘unexpectedness’. However, the nature of each of these definitions is quite different, which could generate misunderstandings because some of them refer to the ‘concept of novelty’ (i.e., to what is expected to be novel), while others refer to the ‘novelty type’ (i.e., to the reference to be considered for establishing the novelty of ideas). Indeed, according to Fiorineschi & Rotini (Reference Fiorineschi and Rotini2021), while ‘unexpectedness’ is a concept of novelty, ‘historical novelty’ and ‘psychological novelty’ define two possible novelty types.

The motivation for this work is the perceived need to improve the understanding of how and why novelty metrics are actually used in practice. Unfortunately, it is impossible to perform an in-depth analysis of the literature for all the available metrics in a single paper. Therefore, we decided to focus on one of the most cited contributions related to novelty metrics. More precisely, this literature review is performed by considering all the contributions that cite the article by Shah, Vargas-Hernandez & Smith (Reference Shah, Vargas-Hernandez and Smith2003a), commonly referenced and acknowledged by engineering design scholars (Kershaw et al. Reference Kershaw, Bhowmick, Seepersad and Hölttä-Otto2019), with 739 citations on Scopus (as of March 2021). In particular, Shah et al. (Reference Shah, Vargas-Hernandez and Smith2003a) provided two families of novelty assessment procedures, namely the ‘a priori’ and the ‘a posteriori’ procedures. The ‘a priori’ approach can be associated with ‘historical novelty’ since it requires a reference set of existing solutions to assess the novelty of ideas. The ‘a posteriori’ approach does not require a set of reference solutions because the ideas to be assessed constitute the reference itself. In this specific procedure, novelty is calculated by counting the occurrences of similar ideas generated in the same session. Therefore, the a posteriori procedure is associated with ‘psychological novelty’ and with the ‘uncommonness’ or ‘unexpectedness’ of a specific idea.

The first article mentioning these two assessment approaches was published more than 20 years ago (Shah, Kulkarni & Vargas-Hernandez Reference Shah, Kulkarni and Vargas-Hernandez2000), just 3 years before the publication that more comprehensively demonstrates the validity of the metrics (Shah et al. Reference Shah, Vargas-Hernandez and Smith2003a). However, notwithstanding the wide diffusion of these seminal contributions, it is still not clear whether and to what extent the mentioned metrics are actually used. To bridge this gap, this work presents a comprehensive literature review of the articles and conference papers (in English) that cite the contribution of Shah et al. (Reference Shah, Vargas-Hernandez and Smith2003a) (hereinafter called SVS), since it is acknowledged to be the reference in which the metrics are comprehensively explained and tested. In this way, it is our intention to provide a clear picture of the actual usage of the two novelty assessment procedures (i.e., the a posteriori and the a priori procedures). More specifically, the objective of this work is to determine the acknowledged applications of the mentioned assessment procedures and provide insightful considerations for future developments and/or applications by discussing the observed critical issues.

This work does not perform an in-depth analysis of each paper, as this activity is unnecessary to accomplish the claimed objective. Rather, it analyses the literature contributions by focusing solely on the extraction of the specific information needed to build a clear representation of the use of the SVS novelty metrics. Therefore, the review intentionally overlooks any paper that could present (in our opinion) certain flaws. Indeed, citing a contribution that presents a questionable application of the metrics without performing a comprehensive discussion of the research work would appear defamatory to its authors and would not be ethically appropriate. The references of the reviewed works are of course listed in the following sections.

The content of the paper is organised as follows. A description of the two SVS metrics is provided in Section 2, together with a brief overview of the works that perform a literature analysis of novelty metrics in the engineering design field (i.e., the context where the work of Shah, Vargas-Hernandez & Smith Reference Shah, Vargas-Hernandez and Smith2003a is mostly cited). The research methodology is comprehensively described in Section 3, together with a detailed description of the key parameters used to perform the analysis of the literature. Section 4 presents the obtained results, which are subsequently discussed in Section 5. Finally, conclusions are reported in Section 6. It is also important to note that it was not possible to retrieve all the citing documents; a detailed list of the documents that we were unable to retrieve is reported in Appendix Table A1.

2. Background

2.1. The novelty metrics proposed by SVS

The well-acknowledged set of metrics proposed by SVS constitutes a milestone for design creativity research. SVS considered four parameters affecting ‘idea generation effectiveness’ – that is, the novelty, variety, quality and quantity of generated ideas – proposing metrics to facilitate the assessment of these ideas. It is not in the scope of this paper to describe each parameter and the related metrics. For additional information, the reader can refer to the original work of SVS (Shah et al. Reference Shah, Vargas-Hernandez and Smith2003a), but it is also relevant to acknowledge that important analyses and improvements have been proposed in the literature about some of the SVS metrics (e.g., Nelson, Wilson & Yen Reference Nelson, Wilson and Yen2009; Brown Reference Brown2014; Fiorineschi, Frillici & Rotini Reference Fiorineschi, Frillici and Rotini2020a, Reference Fiorineschi, Frillici and Rotini2021).

The core of the SVS novelty metrics is the equation used to assess the overall novelty (M) of each idea in a specific set of ideas. More precisely, the novelty is calculated through Eq. (1):

(1) $$ M_{SNM} = \sum_{i=1}^{m} f_i \sum_{j=1}^{n} S_{ij}\, p_j $$

where $f_i$ is the weight of the $i^{th}$ attribute and $m$ is the number of attributes characterising the set of analysed ideas. The parameter $n$ represents the number of design stages characterising the idea generation session, and $p_j$ represents the weight assigned to the $j^{th}$ design stage. Indeed, SVS observed that different attributes or functions can differently impact the overall novelty. Similarly, the same scholars also observed that the contribution to novelty can be affected by the considered design stage [e.g., conceptual design, embodiment design (Pahl et al. Reference Pahl, Beitz, Feldhusen and Grote2007)]. The parameter $S_{ij}$ represents the ‘unusualness’ or the ‘unexpectedness’ of the specific solution used by the analysed idea to implement the $i^{th}$ attribute at the $j^{th}$ design stage.
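As a minimal illustrative sketch (not part of the original SVS work; the function and variable names are ours), Eq. (1) can be computed as a weighted double sum over attributes and design stages:

```python
def overall_novelty(f, p, S):
    """Compute M_SNM as in Eq. (1).

    f : list of m attribute weights (f_i)
    p : list of n design-stage weights (p_j)
    S : m x n matrix, where S[i][j] is the 'unusualness' score of the
        solution used for the i-th attribute at the j-th design stage
    """
    return sum(
        f[i] * sum(S[i][j] * p[j] for j in range(len(p)))
        for i in range(len(f))
    )

# Hypothetical single-stage example with two attributes:
# 0.6 * 8.0 + 0.4 * 5.0 = 6.8
m_value = overall_novelty(f=[0.6, 0.4], p=[1.0], S=[[8.0], [5.0]])
```

With a single design stage (n = 1 and p = [1.0]), the equation reduces to a weighted sum of the attribute scores, which is consistent with the single-stage use reported later in this review.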

From Eq. (1), it is possible to infer that the SVS novelty assessment procedure relies on the concept of functions or key attributes that can be identified within the set of generated ideas. In other words, the novelty of an idea ‘I’ in the SVS approach is relative to a specific universe of ideas {U}. Each idea ‘I’ is considered a composition of the solutions that implement each function (or attribute) used to represent the idea (one solution for each function/attribute). However, according to SVS, the identification of functions and attributes depends on the specific case.

The most important parameter of Eq. (1) is $S_{ij}$, and two different approaches have been proposed by SVS to obtain the related value, that is, the ‘a posteriori’ and the ‘a priori’ approaches.

In the a priori approach, the $S_{ij}$ value is ‘assigned’ by judges who compare the idea (decomposed in terms of attributes and related solutions) against a reference set of existing products. More precisely, SVS report, ‘…a universe of ideas for comparison is subjectively defined for each function or attribute, and at each stage. A novelty score S1 [the value ‘1’ identifies the novelty metric in the SVS set of metrics] is assigned at each idea in this universe’. Therefore, the ‘a priori’ approach is based on subjective evaluations made by referring to subjective universes of ideas. This kind of approach thus relies on the personal knowledge of the judges, similar to what occurs with other well-acknowledged novelty assessment approaches used in the context of design studies (e.g., Hennessey, Amabile & Mueller Reference Hennessey, Amabile and Mueller2011; Sarkar & Chakrabarti Reference Sarkar and Chakrabarti2011; Jagtap Reference Jagtap2019).

In contrast, in the a posteriori approach, the $S_{ij}$ for each attribute is calculated by Eq. (2):

(2) $$ S_{ij} = \frac{T_{ij} - C_{ij}}{T_{ij}} \times 10 $$

where $T_{ij}$ is the total number of solutions (or ideas) conceived for the $i^{th}$ attribute at the $j^{th}$ design stage and $C_{ij}$ is the count of the current solution for the $i^{th}$ attribute at the $j^{th}$ design stage. Therefore, in this case, the reference universe of ideas is the same set of generated ideas. Indeed, the evaluator (it is preferable not to use the term ‘judge’ to avoid confusion with the ‘a priori’ rating approach) is asked to count the number of times a specific solution for a specific attribute (or function) appears within the set of generated ideas (for each design stage). Then, this value ($C_{ij}$) is compared with the total number of solutions generated for the specific attribute in the specific design stage ($T_{ij}$), obtaining a value for the ‘infrequency’ of that solution. Accordingly, the a posteriori approach is heavily affected by the specific set of analysed ideas, and therefore, the novelty value is relative to that specific set of ideas.
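The counting step of the a posteriori approach can be sketched as follows (a simplified single-attribute, single-stage illustration; the function name and example solutions are ours, not from SVS):

```python
from collections import Counter

def a_posteriori_scores(solutions):
    """Apply Eq. (2) to the list of solutions proposed for one attribute
    at one design stage (one entry per generated idea).

    Returns a dict mapping each distinct solution to its S_ij score:
    the less frequent a solution, the higher its score.
    """
    T = len(solutions)           # T_ij: total solutions for the attribute
    counts = Counter(solutions)  # C_ij: occurrences of each distinct solution
    return {sol: (T - C) * 10 / T for sol, C in counts.items()}

# Ten ideas for a 'transmission' attribute (hypothetical data):
scores = a_posteriori_scores(['gear'] * 7 + ['belt'] * 2 + ['magnet'])
# scores -> {'gear': 3.0, 'belt': 8.0, 'magnet': 9.0}
```

Note that a solution appearing in every idea ($C_{ij} = T_{ij}$) scores zero, whereas a unique solution in a large set approaches the maximum score of 10, reflecting the ‘infrequency’ interpretation described above.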

2.2. What do we know about SVS novelty assessment approaches?

In the last decade, perhaps due to their widespread success and dissemination through the scientific community, the SVS assessment procedures have been deeply investigated and discussed. For example, Brown (Reference Brown2014) reviewed and discussed some novelty assessment approaches. Concerning SVS, Brown highlighted some problems related to the subjective identification of functions or key attributes, the subjective identification of the weights for each attribute and the additional difficulty of separating the ideas according to the design stages (if the user intends to use Eq. (1) in its complete form). Srivathsavai et al. (Reference Srivathsavai, Genco, Hölttä-otto and Seepersad2010) reported that the ‘a posteriori’ novelty assessment approach proposed by SVS cannot be used to assess ideas in relation to existing ideas (or products). In other words, considering the definitions provided by Boden (Reference Boden2004), the SVS ‘a posteriori’ novelty assessment approach is not related to historical novelty. However, such a metric can be successfully used to assess psychological novelty (i.e., what is novel for those who actually generate the idea). In theory, the ‘a priori’ approach should be capable of assessing historical novelty; however, its actual use in the scientific community appears unclear.

Furthermore, Sluis-Thiescheffer et al. (Reference Sluis-Thiescheffer, Bekker, Eggen, Vermeeren and De Ridder2016) observed that in certain circumstances, the ‘a posteriori’ approach can lead to misleading novelty scores (i.e., too high), even if similar solutions appear quite often in the examined set. Vargas-Hernandez et al. (Reference Vargas-Hernandez, Paso, Schmidt, Park, Okudan and Pennsylvania2012) claimed that the ‘a posteriori’ approach could be improved to better address changes within the examined sets of ideas and to be more effectively applied to boundary situations.

These and other observations have been made, which we summarised in comprehensive reviews that accurately describe and discuss each of them (e.g., Fiorineschi & Rotini Reference Fiorineschi and Rotini2019, Reference Fiorineschi and Rotini2021; Fiorineschi, Frillici & Rotini Reference Fiorineschi, Frillici and Rotini2020b). Additionally, we discovered and examined a particular issue concerning the ‘a posteriori’ approach, that is, the problem of ‘missing’ or ‘extra’ attributes. We examined the problem in depth (Fiorineschi et al. Reference Fiorineschi, Frillici and Rotini2020a), and we also proposed a comprehensive solution to the problem (Fiorineschi et al. Reference Fiorineschi, Frillici and Rotini2021).

However, despite the comprehensive set of studies that analyse and/or discuss SVS novelty assessment, there are still many questions to be answered. In particular, this work points to the following evident gaps that should be bridged to better apply the metrics in the future:

Q1) What is the rate of use of the SVS novelty assessment approaches within scientific works?

Q2) What is the rate of use of the ‘a priori’ and ‘a posteriori’ versions of the SVS novelty assessment approach?

Q3) What is the level of awareness of SVS users in terms of the novelty type and concept needed for the specific experiment?

Q4) What is the rate of use of the ‘multiple design stages’ capability of the SVS novelty assessment approach?

Q5) What is the range of applicability of the SVS novelty assessment approaches?

Q6) How many scholars comprehensively use SVS novelty assessment approaches with multiple evaluators and perform an interrater agreement test?

Q7) How many research works, among those using SVS novelty assessment approaches, comprehensively describe the assessment rationale that was followed?

The following section provides a description of the investigation targets related to each question, together with a comprehensive description of the research methodology that was followed.

3. Materials and methods

3.1. Description and motivation of the key investigation targets

The seven questions reported in Subsection 2.2 constitute the key investigation targets around which this literature review pivots. Investigation targets are considered here as information categories that are expected to be extracted from the analysis of the literature. Table 1 reports a short description of each of them.

Table 1. List of the investigation targets related to the questions formulated in Subsection 2.2

More specifically, according to Q1, the review is expected to clarify the extent to which the work of SVS is actually cited for the application of the related novelty assessment procedures. Then, through Q2, we aim to clarify the extent to which each of the two assessment approaches (‘a priori’ and ‘a posteriori’) is used.

It is also important to understand whether and to what extent the scholars who use SVS approaches are aware of the actual range of applicability of the SVS novelty assessment approaches. A first indication is given by Q3, which is expected to reveal whether the reviewed works mostly refer to the novelty definitions provided by SVS. Accordingly, in addition to the considered novelty concept, we intend to search for any reference to the considered novelty type, that is, psychological novelty or historical novelty.

Q4 was formulated because, at first glance, we preliminarily observed a lack of use of the multiple-stage capability of Eq. (1). It is important to investigate this issue further since, if confirmed, it would be worth understanding the reasons behind this lack.

Q5 is expected to provide a comprehensive view of the current range of use of SVS metrics. Such information, although not sufficient alone to be considered as a guide, is expected to provide a preliminary indication about the types of application where the SVS novelty metrics have been used.

Q6 and Q7 are also two critical questions (and related investigation targets) that aim to investigate the extent to which the assessment is performed with a robust approach (Q6) and whether the provided information is sufficient to understand and/or to repeat the experiments (Q7).

3.2. Methodological approach for the literature review

Fink (Reference Fink2014) reported that, to perform a valuable literature review, it is necessary to conduct the analysis systematically. In other words, it is important to adopt a sound and repeatable method for the identification, evaluation and synthesis of the actual body of knowledge. Purposeful checklists and procedures for supporting comprehensive literature reviews can be found in the literature (e.g., Blessing & Chakrabarti Reference Blessing and Chakrabarti2009; Okoli & Schabram Reference Okoli and Schabram2010; Fink Reference Fink2014), whose main points can be summarised as follows:

1. Clear definition of the review objectives, for example, through the formulation of purposeful research questions.

2. Selection of the literature database to be used as a reference for extracting the documents.

3. Formulation of comprehensive search queries to allow in-depth investigations.

4. Definition of screening criteria to rapidly filter out the documents that do not comply with the research objectives.

5. Use of a repeatable procedure to perform the review.

6. Analysis of the documents and extraction of the resulting information according to the research objectives.

Point 1 has been achieved by the questions formulated in Section 2, with the related investigation targets described in Subsection 3.1. According to the second point, we selected the Scopus database as the reference for performing the literature review and extracting the references of the documents to be analysed. The achievement of the third point has been relatively simple in this case since there was no intention to identify the papers through the adoption of complex search queries. Indeed, all documents citing the paper of Shah et al. (Reference Shah, Vargas-Hernandez and Smith2003a,Reference Shah, Vargas-Hernandez and Smith b) were considered (739 in March 2021).

Points 4–6 have been achieved by means of the procedure represented in Figure 1, which is described in the following paragraphs.

Figure 1. Procedure used to perform the literature review described in this paper.

Referring to Figure 1, Step 1 was performed by using the Scopus search engine. In particular, the SVS paper (Shah et al. Reference Shah, Vargas-Hernandez and Smith2003a) was identified and exploited to access the list of citing documents. Each citing document was downloaded from the publisher or, when this was not possible, a shareable copy was directly sought from the authors. Unfortunately, a nonnegligible number of documents could not be retrieved (a complete list of these documents is provided in Appendix Table A1).

The first analysis performed on the retrieved documents is represented by Step 2 in Figure 1. In other words, each paper was rapidly screened to verify the presence of the SVS metrics or of any metric somehow related to SVS. To accomplish this, one of the authors rapidly searched for any reference to SVS in the paper’s sections that provide information about the research methodology. The approach used for this verification was to perform in-text searches within each article (e.g., for the terms ‘novelty’, ‘creativity’, ‘uncommonness’, etc.) and/or to directly search for the SVS citation with a subsequent analysis of ‘why’ it was cited. Although the number of processed documents was high, this analysis was relatively simple and allowed us to perform the screening at a rate of 10–20 papers per day.
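Although the screening described above was performed manually, its keyword-based logic can be sketched in code; the snippet below is a hypothetical illustration (the term list and function name are ours), assuming the plain text of each paper is available:

```python
import re

# Terms used in Step 2 to locate methodology passages related to novelty.
SCREEN_TERMS = ('novelty', 'creativity', 'uncommonness', 'unusualness')
# Loose pattern to detect a citation of the SVS paper.
SVS_PATTERN = re.compile(r'Shah\s+et\s+al\.?|Vargas-Hernandez', re.IGNORECASE)

def screen_paper(text):
    """Return the screening terms found in the text and whether the
    SVS paper appears to be cited (the two cues combined in Step 2)."""
    lowered = text.lower()
    terms = [t for t in SCREEN_TERMS if t in lowered]
    return {'terms': terms, 'cites_svs': bool(SVS_PATTERN.search(text))}

example = ('Novelty was assessed through the a posteriori metric '
           'proposed by Shah et al. (2003).')
result = screen_paper(example)
# result -> {'terms': ['novelty'], 'cites_svs': True}
```

Such a script would only flag candidate papers; the subsequent analysis of ‘why’ SVS is cited still requires human reading, as done in this review.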

The subset of papers obtained by Step 2 was further processed to clearly identify the documents using SVS metrics in their original form (Step 3). This activity was time-consuming, as it required a clear understanding of the rationale behind the adopted novelty assessment approach. Fortunately, as shown in Section 4, the number of papers that passed Step 2 was quite limited. The outcomes of Step 3 were then expected to answer Q1 and Q2 and thus to accomplish the first two investigation targets listed in Table 1. Consequently, the papers using the original form of the novelty assessment procedures were further processed to find the information related to the investigation targets from 3 to 7, as listed in Table 1 (Step 4 in Figure 1).

Similar to Step 2, to retrieve information about the novelty concept (Q3), each document among those that complied with Substep 3.1 (Figure 1) was processed by in-text searches for the words ‘novelty’, ‘originality’, ‘newness’, ‘uncommonness’, ‘unusualness’ and ‘unexpectedness’. In this way, it was possible to rapidly identify the parts of the documents that discuss novelty concepts and to understand which of them (if any) were considered in the analysed work. In addition, the terms ‘psychological’ and ‘historical’ (as well as any citation of the works of Boden) were searched throughout the text to verify the presence of any description of the type of novelty (i.e., psychological novelty or historical novelty).

To retrieve the information relevant to Q4, it was necessary to deeply analyse the methodological approach used in the reviewed papers. Indeed, it was important to clearly understand the design phases that were actually considered in the experiments.

The extraction of the information related to Q5 required the identification of the motivations and the research context underpinning each analysed document. Often, it was necessary to search in the paper for information about the application type. This was accomplished by focusing on the introduction, the methodology description and the discussion of each reviewed article. The extracted application types were grouped into a set of categories that we formulated following a subjective interpretation (there was no intention to provide a universally shared set of definitions about the application contexts of the metrics). The set was created on the basis of the differences that we deemed useful to provide a wide overview of the current application range of the SVS approaches for novelty assessment.

To retrieve the information related to Q6, it was necessary to analyse the sections that describe the results and the methodological approach followed in the considered papers. The purpose of this specific analysis was to search for any trace of interrater agreement tests. In the first phase, we verified whether the assessment was performed by more than one evaluator. Then, for the papers presenting more than one evaluator, the analysis focused on the presence of any type of interrater agreement test.

Finally, for Q7, we checked each paper to understand whether and to what extent there was any description of the rationale used to perform the assessment. More specifically, the searched set of information is as follows:

  • Availability of the complete set of ideas.

  • Description of the method used to identify the attributes (or functions) and the related weights.

  • Description of the rationale used to include or exclude ideas with missing or extra attributes.

  • Description of the process used to assess each idea in relation to the identified attributes.

Substep 3.2 (Figure 1) was added after the analysis of a first chunk of retrieved documents, which allowed us to highlight that a nonnegligible number of the documents that passed Step 2 did not use the original version of the SVS approaches. The purpose of this step was to collect these modified versions of the metrics and to extract the available information (if any) about the reasons that led the authors to not use the original version.

4. Results

4.1. Results from the investigation target related to Q1

The analysis revealed that only a very small part of the documents that cited the work of Shah et al. (Reference Shah, Vargas-Hernandez and Smith2003a,Reference Shah, Vargas-Hernandez and Smith b) actually used the related metrics for novelty assessments, as shown in Figure 2 for the papers from 2003 to 2020 (the year 2021 was not considered since the data are updated only to March). The figure also shows that many papers did not use an original version of the SVS approaches. More precisely, excluding the 53 papers for which it was impossible to retrieve the document (see Appendix Table A1), the examined set comprises 686 papers. Among them, 61 papers used the original version of the SVS metrics, while 72 used a modified version (see Table 2 for the complete list of references). Therefore, the original versions of the SVS novelty assessment were used in approximately 9% of papers belonging to the examined set.

Figure 2. Citation trend for the work of Shah, Vargas-Hernandez & Smith (Reference Shah, Vargas-Hernandez and Smith2003a,Reference Shah, Vargas-Hernandez and Smith b). The figure also reports the portion of those citations from papers actually using an original version of the SVS approach or a modified version.

Table 2. List of documents identified according to Substeps 3.1 and 3.2 in Figure 1

To better show the results and to link the contents to the complete set of related references, we used an alphanumeric code (e.g., S10 or A21). The letter indicates whether the document uses an original SVS approach (letter ‘S’) or an alternative approach (letter ‘A’). The number identifies the article, according to the order shown in Table 2.

It is not within the scope of this paper to comprehensively analyse the 72 papers that used a modified version. However, a complete list of metrics is reported in Table 3, together with the indication of the papers that used them.

Table 3. List of metrics used in the works that refer to SVS but use different approaches

Note: See Table 2 for reference codes.

Unfortunately, the authors of the papers that are listed in Table 3 often failed to provide comprehensive motivations behind the need to use a metric different from the original SVS. A motivation was provided by Moore, Sauder & Jin (Reference Moore, Sauder and Jin2014), paper A38 in Table 2. The authors stated that they used a different metric because they were focused on the novelty of each design entity and not on the total novelty of the entire process.

Another case is that of Filippi & Barattin (Reference Filippi and Barattin2016), paper A19 in Table 2, who simply report that their metric resembles SVS but actually measures something different from novelty.

4.2. Results from the investigation target related to Q2

Figure 3 graphically reports the results obtained for Q2, according to Substep 4.1 in Figure 1. In particular, the results show that among the 61 papers that use the original version of the SVS, 63.9% use the ‘a posteriori’ approach, and only 8.2% use the ‘a priori’ version. To these percentages, it is also necessary to add the papers that use both approaches (3.3%). However, it is important to observe that a nonnegligible percentage of the examined documents (24.6%) does not provide sufficient information to understand which of the two approaches was used.

Figure 3. Percentages of the papers using the two different novelty assessment approaches originally proposed by SVS.

The latter information is critical. Indeed, while the use of one approach in place of the other (or both) can be a consequence of the experimental requirements (assuming that the authors correctly selected the most suited one), it is unacceptable for a scientific paper to fail to provide the information needed to understand which assessment procedure was used.

4.3. Results from the investigation target related to Q3

Concerning the level of awareness that SVS users show in terms of novelty concepts, we found that the terms ‘unusualness’ and ‘unexpectedness’ are used very often. This result was expected since these terms are used in the original SVS work. However, as shown in Figure 4, other terms or a ‘mix of terms’ were also used among the reviewed works. More importantly, Figure 4 also shows that a nonnegligible part (24.6%) of the examined articles (among the 61 that use the original SVS metrics) does not provide any information about the considered novelty concept.

Figure 4. Terms used by the different authors to identify the novelty concept underpinning the assessment performed through the SVS metrics.

While it is often acceptable to use one specific term in place of another (e.g., given the absence of a shared definition of novelty), it is unacceptable for a scientific work to provide no reference at all to the underlying novelty concept. Indeed, the reader needs to understand what the authors intended when referring to novelty and/or when defining the investigation objectives. Independent of the correctness of the provided definitions, this is crucial information that cannot be neglected. Omitting any reference to the considered concept strongly suggests that the authors failed to collect sufficient information about the plethora of ways that actually exist for defining and assessing novelty.

Concerning the novelty type, we observed that almost none of the examined papers (among the 61 that use the original SVS metrics) specify whether the actual need is to assess psychological novelty or historical novelty. Only in three cases was the specification correctly included (Gosnell & Miller 2015; Fiorineschi, Frillici & Rotini 2020a, b). This result is also critical. Indeed, the selection of the most suitable novelty assessment procedure (and the related metric) should be performed on the basis of ‘what is actually needed’ and not on the basis of naïve motivations such as a procedure being ‘the most cited’ or ‘often used by scholars’. In the specific case of the SVS metrics, authors should always be aware that, when using the a posteriori approach, they are actually measuring psychological novelty, which cannot be used interchangeably with historical novelty.

4.4. Results from the investigation target related to Q4

The possibility of applying Equation 1 to multiple design stages was not exploited by any of the 61 papers collected in Substep 3.1 in Figure 1.

Unfortunately, no explanation for this lack of use was found in the reviewed documents. However, we can infer that the reason lies in the nature of the SVS metrics themselves. The SVS metrics were formulated to assess the effectiveness of idea generation, and it is widely acknowledged that the most creative part of the design process is the conceptual design phase. Based on what we observed in the reviewed papers, we can assume that the studies using the SVS metrics relate almost entirely to conceptual design activities.
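For context, the structure of Equation 1 (the SVS a posteriori novelty score) can be sketched as follows. This is a minimal illustration with hypothetical counts and weights, not data from any reviewed study: each attribute receives a score S_jk = (T_jk − C_jk)/T_jk × 10, which is then weighted by the importance f_j of each function j and p_k of each design stage k.

```python
def attribute_novelty(total_ideas: int, matching_ideas: int) -> float:
    """SVS attribute score: S = (T - C) / T * 10.
    T = ideas addressing the attribute, C = ideas adopting the same solution;
    rarer solutions score closer to 10."""
    return (total_ideas - matching_ideas) / total_ideas * 10.0


def overall_novelty(scores, function_weights, stage_weights):
    """Weighted sum over functions (rows) and design stages (columns):
    M = sum_j f_j * sum_k p_k * S_jk."""
    return sum(
        f * sum(p * s for p, s in zip(stage_weights, row))
        for f, row in zip(function_weights, scores)
    )


# Illustrative example: 2 functions x 2 design stages, hypothetical counts
# (20 ideas in the pool) and hypothetical weights chosen by the evaluators.
S = [[attribute_novelty(20, 2), attribute_novelty(20, 10)],
     [attribute_novelty(20, 1), attribute_novelty(20, 20)]]
M = overall_novelty(S, function_weights=[0.6, 0.4], stage_weights=[0.7, 0.3])
```

With a single design stage (p = 1), the formula reduces to a weighted average of attribute rarity scores over the functions, which is how the reviewed conceptual design studies effectively apply it.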

4.5. Results from the investigation target related to Q5

Concerning the types of applications in which the SVS metrics were used, the performed analysis (described in Section 3) led to the identification of four different groups (see Figure 5). A short description of each group is reported in Table 4, together with the related list of coded references.

Figure 5. Types of application identified for the SVS novelty metrics.

Table 4. Application types identified for the SVS novelty metrics

As shown in Figure 5, a majority of works were about idea generation, followed by studies about creativity, innovation or novelty metrics.

All of the reviewed contributions are related to academia: even when industry was indirectly mentioned or involved, the application of the novelty metrics was always performed by academic staff for academia-related purposes. There is therefore no evidence of the use of the SVS metrics in industry.

4.6. Results from the investigation target related to Q6

Figure 6 reveals that only a portion of the reviewed works used an interrater reliability (IRR) test to validate the novelty assessment. At first glance, the application rate of IRR tests is quite irregular, with no evident long-term trend: their use appeared to increase from 2009 to 2018 but decreased rapidly thereafter.

Figure 6. Papers that use SVS novelty metrics assessment and apply interrater agreement tests.

Surprisingly, in some specific years, no IRR test was applied to novelty assessment results (i.e., 2006, 2007, 2011, 2014, 2020 in Figure 6). In total, only 18 contributions (among the 61 that use the original SVS novelty assessment approach) mentioned an IRR test (e.g., Shah, Vargas-Hernandez & Smith 2003b; Kurtoglu, Campbell & Linsey 2009; Johnson et al. 2016; Vandevenne, Pieters & Duflou 2016; Bao, Faas & Yang 2018).

4.7. Results from the investigation target related to Q7

It emerged that a large part of the reviewed documents do not provide the information needed for a comprehensive understanding and repetition of the assessment. We are conscious that, due to ethical and/or professional agreements, it is often impossible to share whole sets of ideas (e.g., images, sketches or CAD files). This constraint could be the main reason for the absence of documents that allow a complete replication of the experiment: without the original set of ideas, it is not possible to repeat the same assessment. However, as shown in Figure 7, a nonnegligible number of the reviewed documents report ‘partial’ but important information about the assessment procedure (e.g., a description of the procedure used to identify the attributes and the related weights, the list of attributes, the set of assessed ideas codified in terms of functions and attributes, etc.).

Figure 7. Percentages of articles that describe the rationale for using the original SVS novelty assessment approaches. In particular, the graph shows how many documents do not report, partially report or are not required to report information about the assessment rationale. None of the reviewed papers completely reported the information required to ensure the repeatability of the experiment.

Only in two cases (among the 61 articles that used the original SVS assessment procedure) was the description of the rationale not needed. In these cases, the authors did not assess ideas from a real experiment or design session but used ad hoc data, either to explain their proposal (Nelson et al. 2009) or to illustrate potential SVS issues in borderline cases (Fiorineschi et al. 2018a).

5. Discussion

5.1. Findings

According to the seven research targets reported in Table 1 and the related research questions formulated in Section 2, the outcomes of this review can be summarised as shown in Table 5.

Table 5. Findings from each of the seven research questions introduced in Section 2

SVS novelty metrics are used by less than 10% of the citing contributions (and almost all of these use the ‘a posteriori’ version), notwithstanding the high number of citations received by the work of Shah et al. (2003a). Indeed, many of the reviewed contributions draw on other contents of that paper. For example, Dym et al. (2005) refer to the SVS concept of variety, while Chandrasegaran et al. (2013) cite Shah et al. (2003a) when discussing creativity-related aspects. In general, the work of Shah et al. (2003a, b) is often mentioned when discussing creativity and idea generation concepts and definitions (e.g., Charyton & Merrill 2009; Linsey et al. 2010; Crismond & Adams 2012; Gonçalves, Cardoso & Badke-Schaub 2014).

The categories of application are congruent with the original intent of the SVS work, that is, to support design creativity research with a systematic and practical approach to performing repeatable assessments. However, the observed (partial or total) lack of information about the assessment rationale implies that the experiments described in the reviewed papers cannot be repeated (or comprehensively checked) by other scholars. Indeed, as highlighted by Brown (2014) and confirmed by our recent works (Fiorineschi et al. 2020a, b), the identification of the functions and/or attributes of the assessed ideas and the definition of the related weights are highly subjective.

Furthermore, it was a disappointing surprise to find that interrater agreement tests are often not performed, or not even mentioned at all. Due to the aforementioned subjectivity (which characterises any creativity assessment approach), it is always necessary to involve multiple evaluators (at least two) and then to carefully check the robustness of the obtained scores. Fortunately, established interrater agreement approaches (e.g., Cronbach 1951; Cohen 1960; Hayes & Krippendorff 2007) were used in some of the reviewed works (e.g., Shah et al. 2003b; Kurtoglu et al. 2009).
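To make such a check concrete, Cohen’s (1960) kappa for two raters can be computed as in the sketch below. This is a minimal implementation for nominal categories; the idea labels are invented for illustration, and real studies would typically rely on an established statistics package.

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning nominal categories to the same set of items."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must score the same items")
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginals.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in ca.keys() | cb.keys()) / (n * n)
    return (observed - expected) / (1 - expected)


# Two raters classifying eight ideas into invented novelty categories.
a = ["high", "high", "low", "low", "mid", "mid", "high", "low"]
b = ["high", "high", "low", "mid", "mid", "mid", "high", "low"]
kappa = cohens_kappa(a, b)  # 1 = perfect agreement, 0 = chance level
```

A kappa of 1 indicates perfect agreement and 0 agreement at chance level; Krippendorff’s alpha generalises the same idea to more than two raters, missing data and other measurement scales.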

Another important flaw detected in the reviewed documents that use the original SVS approaches is the lack of comprehensive descriptions of the type of novelty that the investigation is expected to assess. In our recent review of novelty metrics, we proposed a map to orient among the variety of available metrics (Fiorineschi & Rotini 2021), and one of its most important parameters is the type of novelty, intended as historical or psychological novelty. The two SVS novelty assessment approaches address two distinct types of novelty but implement the same concept of novelty (uncommonness or unexpectedness). Any comprehensive work using the SVS metrics should therefore demonstrate that their use is actually compatible with the intended measures.

5.2. Implications

The present work provides a clear picture of the actual usage of the SVS novelty assessment approaches, making it possible to understand the main flaws that characterise most of the published works. Implications from the observations extracted for each research question (see Table 5) are summarised in Table 6.

Table 6. Implications of the results presented in Table 5

The implications expressed in Table 6 can be reformulated with a sharper focus on the different actors involved. In the bullet list below, we do so by referring to scholars, teachers, reviewers, PhD supervisors and industrial practitioners:

  • Implications for researchers who use the metrics and for paper reviewers:

    The implication derived from the findings related to Q6 deserves particular attention. Indeed, these findings imply that most of the reviewed works actually do not follow a robust assessment procedure. It is acknowledged that novelty assessment is necessarily performed by means of subjective interpretations of the assessed design or idea, and this also applies to the SVS metrics. Therefore, any assessment that fails to perform an IRR test among the evaluators cannot be considered scientifically valid.

Additional implications can be extracted from the findings pertaining to Q7, which show that most of the research works do not provide the information required to repeat the assessment with the same data. To enable comparable experiments, the reader must be allowed to understand every detail of how the experiment was conducted. First, the assigned design or idea generation task should be carefully described, as should the method used to administer it to the sample (or samples) of participants. It is also important to characterise the sample of participants in terms of knowledge background, design experience and, of course, gender, ethnographic and age distributions. In addition, it is essential to provide in-depth information about the experimental procedure: the time chosen to introduce the task, the time allotted to perform it and the presence (if any) of pauses (e.g., to enable incubation). Finally, it is necessary to indicate how participants worked, for example, whether they worked alone or in groups, in a single design session or in subsequent sessions, on a single task or on multiple tasks, etc.

It is also important to provide detailed information about the postprocessing activity (i.e., the novelty assessment, for the scope of this work). More specifically, it is crucial to provide information about the evaluators, their expertise and their backgrounds. The assessment procedure itself also needs to be described in detail. In particular, it is important to mention and describe any prealignment sessions among raters, as well as the procedure that each of them followed to identify the attributes of the ideas to be assessed.

Without the information described above, any research work in which designs or ideas are assessed in terms of novelty (as is the case for any creativity-related assessment) cannot serve as a reliable reference.

Generally, these considerations can be extended to each of the acknowledged novelty assessment approaches.

  • Implications for teachers:

What emerged from this paper is the need for a better understanding of the available novelty concepts and types. Teaching creativity in design contexts should not neglect how to measure it. In this sense, given the plethora of different metrics and assessment procedures, it is important that future designers and scholars start their professional lives with a sound understanding of ‘what can be measured’. Indeed, a clear understanding of the novelty concepts and types to be measured will surely help in selecting the most suitable way to assess novelty and, in turn, creativity.

  • Implications for PhD supervisors:

The need for more comprehensive descriptions expressed in the previous points should also constitute a reference for PhD supervisors. Indeed, PhD students are often expected to perform experimental activities and then to apply (when needed) novelty metrics. First, it is necessary to identify ‘what’ is intended to be assessed. As demonstrated by the flaws highlighted in this paper, this is not a trivial task, and the aim is to clarify both the novelty type and the novelty concept (Fiorineschi & Rotini 2021). Then, before starting any experimental activity, it is important to plan it by considering the parameters involved in the assessment procedure (which depend on the specific metric). Eventually, each experimental step must be appropriately archived and described to allow a comprehensive account of the adopted procedure. This is fundamental both to allow reviewers to evaluate the robustness and correctness of the experiment and to allow scholars to repeat it under comparable boundary conditions.

  • Implications for future development of novelty metrics:

A particular criticality observed among the reviewed works concerns papers that cite SVS but use specifically developed metrics. The criticism stems from the observed absence of sound motivations or justifications for the actual need to use or propose an alternative metric. As already highlighted, it is not our intention to denigrate any work, but it is important to send a clear message for future works. More specifically, it is necessary to stem the trend of creating ‘one’s own’ metrics to be used in place of already published ones without providing any reasonable motivation and, more importantly, without any proof of the actual improvement over existing metrics. Indeed, the field of novelty metrics is already populated by many alternatives, and it is already quite difficult to navigate them. We are aware of this challenge because we personally tried to review novelty metrics to support our own selection (Fiorineschi et al. 2019b; Fiorineschi & Rotini 2020, 2021). This does not mean that new metric proposals or improvements should be avoided; that would be a hypocritical suggestion from us, since we have also proposed new metrics (Fiorineschi, Frillici & Rotini 2019a, 2021). What is important for future works dealing with novelty assessment is to use metrics that have been tested (in previous works or in the same work) and to comprehensively describe the motivation for the choice. In particular, it is necessary to ensure that the selected metric is compatible with the novelty type and novelty concept that are expected to be measured according to the experimental objectives.

  • Implications for design practice:

Indirectly, this work could also help design practice. Indeed, the improved understanding of metrics that this paper provides is intended to support both better novelty measurements and more efficient method selection. More precisely, the future possibility of improving the selection and arrangement of ideation methods will ultimately help develop better products.

5.3. Limitations and research developments

The work suffers from two main limitations, which have been partially highlighted in the previous sections. First, the analysis is limited to the citations of the work of Shah et al. (2003a) as of March 2021; second, we were unable to retrieve a nonnegligible number of documents (see Appendix Table A1). Indeed, our institution did not provide access to all contributions, and we tried to retrieve as many of these as possible by directly asking the authors for a copy. Nevertheless, scholars could take inspiration from this work to perform similar analyses on other well-known assessment approaches (e.g., Hennessey, Amabile & Mueller 2011; Sarkar & Chakrabarti 2011). With additional works such as this one, it may be possible to generate a clear map of the actual usage of the creativity assessment approaches used in the field of design creativity research.

Concerning the research questions, as mentioned above, we identified the seven questions introduced in Section 2, which arise from our experience with in-depth (theoretical and practical) studies on novelty. We are therefore conscious that this work may not encompass all possible doubts and questions about the topic.

An important limitation also concerns the missing analysis of the alternative metrics used in place of the SVS approaches. We believe that such an analysis is a feasible direction for future research, for which the list of documents reported in Table 3 could pave the way. The expected outcome of this possible extension of the work is a complete list and analysis of the whole set of metrics somehow inspired by or based on the SVS metrics. For that purpose, the list of metrics summarised in Table 3 could be integrated with other contributions (identified by following a different approach) that were already reviewed in our previous works (Fiorineschi et al. 2019b; Fiorineschi & Rotini 2019).

A further limitation of this work concerns the absence of any comprehensive analysis or discussion of the works that, even without using the metrics, provide important considerations about the pros and cons of the SVS approach. However, we intentionally avoided this kind of analysis to prevent overlap with our recently published works focused on this kind of investigation (Fiorineschi et al. 2018a, 2020a, b, 2021), where the reader can find an updated set of related information.

5.4. Expected impact

The main impact expected from this work is upon academic research on design creativity and can be summarised as follows:

  • Help scholars understand the application field of the SVS novelty assessment approach.

  • Promote comprehensive assessments through the correct application of the metrics by at least two evaluators, whose scores should always be checked by an IRR test.

  • Invite scholars to provide sufficient information about the assessment rationale followed when applying the selected novelty metric.

  • Highlight the need for clear indications of the novelty type (i.e., historical or psychological novelty) that the selected metric is intended to measure.

Generally, it is expected that this work will shed further light on the field of design creativity research by focusing on the novelty assessment of generated ideas. In particular, the criticalities highlighted in this work are expected to push scholars towards a more responsible use of metrics and towards robust assessment procedures. This is fundamental to improving the scientific value of novelty-related studies on design and on design methods and tools.

Unfortunately, it is not possible to formulate any ‘short-term’ expectation from the industrial point of view. Indeed, this work highlights that the use of the metrics is limited to academic fields (at least according to what is claimed in the analysed contributions). Therefore, a question may arise: ‘Are novelty metrics from academia useful for the industrial field?’

However, to answer this question, at least two other issues should be considered:

  • ‘Are industrial practitioners sufficiently aware of the existence of novelty assessment procedures?’

  • ‘Are industrial practitioners conscious of the potential information that can be extracted by the application of novelty and other creativity-related metrics?’

Therefore, even if a direct impact cannot be expected for industry, this work can push scholars to bridge the gap that currently exists between academia and industry in relation to the topic.

6. Conclusions

The main focus of this work is to shed light on the actual use of the two novelty assessment procedures presented by Shah et al. (2003a, b). For that purpose, we formulated seven research questions, whose answers were generated through the analysis of the literature contributions that cite the work of Shah et al. (2003a, b). We found that even if scholars do use the SVS novelty assessment approaches, the citations to the work of Shah et al. often do not refer to novelty assessment. In addition, scholars frequently use a modified version of the SVS metrics without providing explicit motivations for this need. However, the most important issue highlighted in this work is the almost total lack of any reference to the type of novelty (historical or psychological) that scholars intend to assess when they use the SVS metrics. This kind of information is fundamental, since it allows the reader to understand whether the selected metrics are suitable for obtaining the expected results. Furthermore, the results highlight that, except for a few cases, the information provided in the reviewed works is not sufficient to repeat the experiments and/or the assessments under comparable conditions. Indeed, it emerged that only a few works provide a comprehensive description of the assessment rationale and apply interrater agreement evaluations. This is a critical aspect that scholars should consider to improve future literature contributions involving novelty assessments. As a further result, the methodology applied in this work can be reused by scholars to perform similar analyses on other metrics, not limited to the field of creativity assessment.
Several metrics exist in the literature (e.g., for the quality of a design, the modularity level of a system, the sustainability of products and processes, etc.), and, similar to what occurs in creativity assessment, comprehensive selection guidelines are still missing. The expected impact of this work is to help scholars perform novelty assessments with improved scientific value. Eventually, studies such as this one can contribute to the discussion about how to use metrics in the different branches of engineering design research.

Acknowledgments

This work was supported by the Open Access Publishing Fund of the University of Florence.

Appendix

A.1. List of the articles for which it was impossible to retrieve the document in this literature review

Table A1. Articles that could not be retrieved

References

Abid, A., Shi, W. & Toh, C. A. 2018 The ends or the means? Understanding how students use causal and effectual information during design activities. Proceedings of the ASME Design Engineering Technical Conference 7, 110.Google Scholar
Ahmed, F., Ramachandran, S. K., Fuge, M., Hunter, S. & Miller, S. 2018 Interpreting idea maps: pairwise comparisons reveal what makes ideas novel. Journal of Mechanical Design 141 (2), 021102.CrossRefGoogle Scholar
Amabile, T. M. 1983 The Social Psychology of Creativity (ed. Kidd, R. F.). Springer-Verlag; doi:10.1080/02640828708723925.CrossRefGoogle Scholar
Atilola, O., Tomko, M. & Linsey, J. S. 2015 The effects of representation on idea generation and design fixation: a study comparing sketches and function trees. Design Studies 42, 110136.CrossRefGoogle Scholar
Bao, Q., Faas, D. & Yang, M. 2016 Interplay of sketching & prototyping in early stage product design. In The Fourth International Conference on Design Creativity (ed. Linsey, J. S., Yang, M. C. & Nagai, Y.). Cambridge University Press.Google Scholar
Bao, Q., Faas, D. & Yang, M. 2018 Interplay of sketching & prototyping in early stage product design. International Journal of Design Creativity and Innovation 0349, 123.Google Scholar
Barclift, M., Simpson, T. W., Nusiner, M. A. & Miller, S. 2017 An investigation into the driving factors of creativity. In Proceedings of the ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 114. American Society of Mechanical Engineers.Google Scholar
Berthelsdorf, F. A. & Stone, R. B. 2017 Creativity of concept ideation as affected by team personality. In Proceedings of the ASME Design Engineering Technical Conference (Vol. 7). American Society of Mechanical Engineers; doi:10.1115/DETC2017-67974.Google Scholar
Blessing, L. T. M. & Chakrabarti, A. 2009 DRM, A Design Research Methodology. Springer.CrossRefGoogle Scholar
Boden, M. A. 2004 The Creative Mind: Myths and Mechanism (2nd ed.). Routledge; doi:10.4324/9780203508527.CrossRefGoogle Scholar
Boedhoe, R. & Badke-Schaub, P. 2017 Can visual facilitation beat verbal facilitation? In Proceedings of the International Conference on Engineering Design, ICED (Vol. 8), pp. 449457. Design Society.Google Scholar
Borgianni, Y., Maccioni, L., Fiorineschi, L. & Rotini, F. 2020 Forms of stimuli and their effects on idea generation in terms of creativity metrics and non-obviousness. International Journal of Design Creativity and Innovation 8 (3), 147164.CrossRefGoogle Scholar
Brown, D. C. 2014Problems with the calculation of novelty metrics.” In Proceedings of the 6th International Conference on Design Computing and Cognition. Cambridge University Press, online document (downloadable on February 16th 2021) http://web.cs.wpi.edu/~dcb/Papers/DCC14/DCC14-Brown-Novelty-workshop.pdf.Google Scholar
Camburn, B., He, Y., Raviselvam, S., Luo, J. & Wood, K. 2019Evaluating crowdsourced design concepts with machine learning.” Proceedings of the ASME Design Engineering Technical Conference (Vol. 7). American Society of Mechanical Engineers; doi:10.1115/DETC2019-97285.Google Scholar
Cascini, G., Fiorineschi, L. & Rotini, F. 2018Investigating on the re-use of conceptual design representations.” In International Design Conference - Design 2018, Dubrovnik - Croatia, pp. 10091020. Design Society.Google Scholar
Cascini, G., Fiorineschi, L. & Rotini, F. 2020 Impact of design representations on creativity of design outcomes. Journal of Integrated Design and Process Science 23 (2), 3160.CrossRefGoogle Scholar
Chan, J., Fu, K., Schunn, C., Cagan, J., Wood, K. & Kotovsky, K. 2011a On the benefits and pitfalls of analogies for innovative design: ideation performance based on analogical distance, commonness, and modality of examples. Journal of Mechanical Design 133 (8), 081004.CrossRefGoogle Scholar
Chan, J., Fu, K., Schunn, C., Cagan, J., Wood, K. & Kotovsky, K. 2011bOn the effective use of design-by-analogy: the influences of analogical distance and commonness of analogous designs on ideation performance. In ICED 11 - 18th International Conference on Engineering Design - Impacting Society Through Engineering Design (Vol. 7), pp. 8596. Design Society.Google Scholar
Chandrasegaran, S. K., Ramani, K., Sriram, R. D., Horváth, I., Bernard, A., Harik, R. F. & Gao, W. 2013 The evolution, challenges, and future of knowledge representation in product design systems. Computer-Aided Design 45 (2), 204228.CrossRefGoogle Scholar
Charyton, C. & Merrill, J. A. 2009 Assessing general creativity and creative engineering design in first year engineering students. Journal of Engineering Education 98 (2), 145156.CrossRefGoogle Scholar
Choo, P. K., Lou, Z. N., Koo, B., Camburn, B. A., Grey, F. & Wood, K. L. 2014Ideation methods: a first study on measureed outcomes with personality type.” In Proceedings of the ASME 2014 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2014 August 17–20, 2014, Buffalo, New York, USA, pp. 112. American Society of Mechanical Engineers.Google Scholar
Chulvi, V., Agost, M. J., Felip, F. & Gual, J. 2020 Natural elements in the designer’s work environment influence the creativity of their results. Journal of Building Engineering 28, 101033.
Chusilp, P. & Jin, Y. 2006 Impact of mental iteration on concept generation. Journal of Mechanical Design, Transactions of the ASME 128 (1), 14–25.
Coelho, D. A. & Vieira, F. L. 2018 The effect of previous group interaction on individual ideation novelty and variety. International Journal of Design Creativity and Innovation 6 (1–2), 80–92.
Cohen, J. 1960 A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20, 37–46.
Crismond, D. P. & Adams, R. S. 2012 The informed design teaching and learning matrix. Journal of Engineering Education 101 (4), 738–797.
Cronbach, L. J. 1951 Coefficient alpha and the internal structure of tests. Psychometrika 16 (3), 297–334.
Cuellar, E., Lutz, B. D., Trageser, D. & Cruz-Lozano, R. 2020 “Exploring the influence of gender composition and activity structure on engineering teams’ ideation effectiveness.” In ASEE Annual Conference and Exposition, Conference Proceedings, Vol. 2020-June. American Society for Engineering Education; doi:10.18260/1-2--34649.
Curtis, S. K., Mattson, C. A., Hancock, B. J. & Lewis, P. K. 2012 “Multiobjective optimization formulation.” In 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, Hawaii, pp. 1–15. AIAA.
Curtis, S. K., Mattson, C. A., Hancock, B. J. & Lewis, P. K. 2013 Divergent exploration in design with a dynamic multiobjective optimization formulation. Structural and Multidisciplinary Optimization 47 (5), 645–657.
Deitrick, E., O’Connell, B. & Shapiro, R. B. 2014 “The discourse of creative problem solving in childhood engineering education.” In Proceedings of International Conference of the Learning Sciences, ICLS, Vol. 1, pp. 591–598. International Society of the Learning Sciences.
Deo, S. R. & Holtta-Otto, K. 2018 “Application of external stimuli during ideation: impact of background and inclination based stimuli on novice minds.” In International Conference on Transforming Engineering Education, ICTEE 2017, pp. 1–12. IEEE.
Dinar, M., Park, Y. S., Shah, J. J. & Langley, P. 2015 Patterns of creative design: predicting ideation from problem formulation. In Proceedings of the ASME Design Engineering Technical Conference, Vol. 7, pp. 1–10. American Society of Mechanical Engineers.
Doboli, A. & Umbarkar, A. 2014 The role of precedents in increasing creativity during iterative design of electronic embedded systems. Design Studies 35 (3), 298–326.
Doheim, R. M. & Yusof, N. 2020 Creativity in architecture design studio: assessing students’ and instructors’ perception. Journal of Cleaner Production 249, 1–11.
Duflou, J. R. & Verhaegen, P. A. 2011 Systematic innovation through patent based product aspect analysis. CIRP Annals - Manufacturing Technology 60 (1), 203–206.
Dumas, D., Schmidt, L. C. & Alexander, P. A. 2016 Predicting creative problem solving in engineering design. Thinking Skills and Creativity 21, 50–66; doi:10.1016/j.tsc.2016.05.002.
Dunnigan, K. A., Dunford, A. & Bringardner, J. 2020 “From cornerstone to capstone: students’ design thinking and problem solving.” In ASEE Annual Conference and Exposition, Conference Proceedings, Vol. 2020-June. American Society for Engineering Education; doi:10.18260/1-2--34693.
Durand, F., Helms, M. E., Tsenn, J., McTigue, E., McAdams, D. A. & Linsey, J. S. 2015 Teaching students to innovate: evaluating methods for bioinspired design and their impact on design self efficacy. In Proceedings of the ASME Design Engineering Technical Conference, Vol. 7, pp. 1–14. American Society of Mechanical Engineers.
Dym, C. L., Agogino, A. M., Eris, O., Frey, D. D. & Leifer, L. J. 2005 Engineering design thinking, teaching, and learning. Journal of Engineering Education 94 (1), 103–120.
Faas, D., Bao, Q., Frey, D. D. & Yang, M. C. 2014 The influence of immersion and presence in early stage engineering designing and building. Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM 28 (2), 139–151.
Faas, D. & Gong, S. 2016 Improving novelty and quality during introductory mechanical design competitions. In Proceedings of the ASME Design Engineering Technical Conference, Vol. 3, pp. 1–9. American Society of Mechanical Engineers.
Ferent, C. & Doboli, A. 2011 Measuring the uniqueness and variety of analog circuit design features. Integration, the VLSI Journal 44 (1), 39–50.
Filippi, S. & Barattin, D. 2016 Definition and quantification of innovation in interaction. International Journal of Design Creativity and Innovation 4 (3–4), 119–143.
Fillingim, K. B., Shapiro, H., Reichling, C. J. & Fu, K. 2021 Effect of physical activity through virtual reality on design creativity. Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM 35 (1), 99–115.
Fink, A. 2014 Conducting Research Literature Reviews: From the Internet to Paper. Sage Publications Inc; doi:10.1002/nha3.10270.
Fiorineschi, L., Frillici, F. S. & Rotini, F. 2018a “Issues related to missing attributes in a-posteriori novelty assessments.” In International Design Conference - Design 2018, Dubrovnik - Croatia, pp. 1067–1078. Design Society.
Fiorineschi, L., Frillici, F. S. & Rotini, F. 2018b “A-posteriori novelty assessments for sequential design sessions.” In Proceedings of International Design Conference, DESIGN; doi:10.21278/idc.2018.0119.
Fiorineschi, L., Frillici, F. S. & Rotini, F. 2019a “Three-dimensional approach for assessing uncommonness of ideas.” In Proceedings of the Design Society: International Conference on Engineering Design, Delft - The Netherlands, pp. 229–238. Cambridge University Press.
Fiorineschi, L., Frillici, F. S. & Rotini, F. 2019b “Orienting through the variants of the Shah a-posteriori novelty metric.” In Proceedings of the Design Society: International Conference on Engineering Design, Delft - The Netherlands, pp. 2317–2326.
Fiorineschi, L., Frillici, F. S. & Rotini, F. 2020a Impact of missing attributes on a posteriori novelty assessments. Research in Engineering Design 31, 221–234.
Fiorineschi, L., Frillici, F. S. & Rotini, F. 2020b Subjectivity of novelty metrics based on idea decomposition. International Journal of Design Creativity and Innovation 8, 223–239.
Fiorineschi, L., Frillici, F. S. & Rotini, F. 2021 Refined metric for a-posteriori novelty assessments. Journal of Engineering Design 33 (1), 39–63.
Fiorineschi, L., Frillici, F. S., Rotini, F. & Tomassini, M. 2018 Exploiting TRIZ tools for enhancing systematic conceptual design activities. Journal of Engineering Design 29 (6), 259–290.
Fiorineschi, L. & Rotini, F. 2019 A-posteriori novelty metrics based on idea decomposition. International Journal of Design Sciences and Technology 23 (2), 187–209.
Fiorineschi, L. & Rotini, F. 2020 Orienting through the variety of novelty metrics. International Journal of Design Sciences and Technology 24 (1), 83–101.
Fiorineschi, L. & Rotini, F. 2021 Novelty metrics in engineering design. Journal of Engineering Design 32, 590–620; doi:10.1080/09544828.2021.1928024.
Fu, K., Kotovsky, K. & Chan, J. 2012 “The meaning of ‘near’ and ‘far’: the impact of structuring design databases and the effect of distance of analogy on design output.” In Proceedings of the ASME 2012 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2012, pp. 1–12. ASME.
Fu, K., Murphy, J., Yang, M., Otto, K., Jensen, D. & Wood, K. 2014 Design-by-analogy: experimental evaluation of a functional analogy search methodology for concept generation improvement. Research in Engineering Design 26 (1), 77–95.
Genco, N., Hölttä-Otto, K. & Seepersad, C. C. 2010 “An experimental investigation of the innovation capabilities of undergraduate engineering students.” In ASEE Annual Conference and Exposition, Conference Proceedings, Vol. 101, Louisville, Kentucky, pp. 60–81. American Society for Engineering Education.
Glier, M. W., Schmidt, S. R., Linsey, J. S. & McAdams, D. A. 2011 Distributed ideation: idea generation in distributed capstone engineering design teams. International Journal of Engineering Education 27 (6), 1281–1294.
Gonçalves, M., Cardoso, C. & Badke-Schaub, P. 2014 What inspires designers? Preferences on inspirational approaches during idea generation. Design Studies 35 (1), 29–53.
Gosnell, C. A. & Miller, S. R. 2015 But is it creative? Delineating the impact of expertise and concept ratings on creative concept selection. Journal of Mechanical Design 138 (2), 1–11.
Gyory, J. T., Cagan, J. & Kotovsky, K. 2018 Should teams collaborate during conceptual engineering design? An experimental study. In Proceedings of the ASME 2018 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC2018, pp. 1–8. ASME.
Gyory, J. T., Cagan, J. & Kotovsky, K. 2019 Are you better off alone? Mitigating the underperformance of engineering teams during conceptual design through adaptive process management. Research in Engineering Design 30 (1), 85–102.
Han, J., Shi, F., Chen, L. & Childs, P. R. N. 2018 The combinator-a computer-based tool for creative idea generation based on a simulation approach. Design Science 4, 1–34.
Han, J., Shi, F. & Childs, P. R. N. 2016 The combinator: a computer-based tool for idea generation. In Proceedings of International Design Conference, DESIGN, DS 84, pp. 639–648.
Hay, L., Duffy, A. & Grealy, M. 2019 The novelty perspectives framework: a new conceptualisation of novelty for cognitive design studies. In Proceedings of the Design Society: International Conference on Engineering Design, Vol. 1, pp. 389–398. Cambridge University Press.
Hayes, A. F. & Krippendorff, K. 2007 Answering the call for a standard reliability measure for coding data. Communication Methods and Measures 1 (1), 77–89.
Henderson, D., Booth, T., Jablokow, K. & Sonalkar, N. 2020 Best fits and dark horses: can design teams tell the difference? In Proceedings of the ASME Design Engineering Technical Conference, Vol. 3, pp. 1–11. ASME.
Hennessey, B. A., Amabile, T. M. & Mueller, J. S. 2011 Consensual Assessment, Encyclopedia of Creativity (2nd ed., Vol. 1). Academic Press; doi:10.1016/b978-0-12-375038-9.00046-7.
Hernandez, N. V., Kremer, G., Linsey, J. & Schmidt, L. 2010 “Systematic ideation curriculum effectiveness investigation & deployment to enhance design learning.” In ASEE Annual Conference and Exposition, Conference Proceedings; doi:10.18260/1-2--16740. ASEE.
Hua, M., Han, J., Ma, X. & Childs, P. 2019 “Exploring the effect of combinational pictorial stimuli on creative design performance.” In Proceedings of the International Conference on Engineering Design, ICED, Vol. 2019-August, pp. 1763–1772. Cambridge University Press.
Hwang, D., Lauff, C. A., Perez, K. B., Camburn, B. A. & Wood, K. L. 2020 Comparing the impacts of design principles for additive manufacturing on student and experienced designers. International Journal of Engineering Education 36 (6), 1862–1876.
Jagtap, S. 2019 Design creativity: refined method for novelty assessment. International Journal of Design Creativity and Innovation 7 (1–2), 99–115.
Jagtap, S., Andreas, L., Viktor, H., Elin, O. & Anders, W. 2015 Interdependency between average novelty, individual average novelty, and variety. International Journal of Design Creativity and Innovation 3 (1), 43–60.
Jagtap, S., Larsson, A., Hiort, V., Olander, E. & Warell, A. 2012 Ideation metrics: interdependency between average novelty and variety. In Proceedings of International Design Conference, DESIGN DS 70, pp. 1881–1892. Cambridge University Press.
Jansson, D. G. & Smith, S. M. 1991 Design fixation. Design Studies 12 (1), 3–11.
Jia, L., Becattini, N., Cascini, G. & Tan, R. 2020 Testing ideation performance on a large set of designers: effects of analogical distance. International Journal of Design Creativity and Innovation 8 (1), 31–45.
Johnson, T. A., Caldwell, B. W., Cheeley, A. & Green, M. G. 2016 “Comparison and extension of novelty metrics for problem-solving tasks.” In Proceedings of the ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference IDETC/CIE 2016, Charlotte, North Carolina; doi:10.1115/DETC2016-60319. ASME.
Kershaw, T. C., Bhowmick, S., Seepersad, C. C. & Hölttä-Otto, K. 2019 A decision tree based methodology for evaluating creativity in engineering design. Frontiers in Psychology 10, 1–19.
Kim, J. W., McAdams, D. A. & Linsey, J. 2015 “Helping students to find biological inspiration: impact of valuableness and presentation format.” In Proceedings - Frontiers in Education Conference, FIE, Vol. 2015-February, pp. 257–262. Institute of Electrical and Electronics Engineers Inc.
Kim, S. H. & Zimmerman, H. T. 2019 “Understanding the practices and the products of creativity: making and tinkering family program at informal learning environments.” In Proceedings of the 18th ACM International Conference on Interaction Design and Children, IDC 2019, pp. 246–252. Association for Computing Machinery.
Kulhanek, A. J., Irgens, G. A., Swiecki, Z. L., Chesler, N. C., Shaffer, D. W. & Bodnar, C. A. 2017 “Assessing the effectiveness of Shah’s innovation metrics for measuring innovative design within a virtual design space.” In ASEE Annual Conference and Exposition, Conference Proceedings, Vol. 2017-June; doi:10.18260/1-2--27624. ASEE.
Kurtoglu, T., Campbell, M. I. & Linsey, J. S. 2009 An experimental study on the effects of a computational design tool on concept generation. Design Studies 30 (6), 676–703.
Levy, B., Hilton, E., Tomko, M. & Linsey, J. 2017 “Investigating problem similarity through study of between-subject and within-subject experiments.” In Proceedings of the ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference IDETC/CIE 2017, August, Cleveland, Ohio, USA, pp. 1–12. ASME.
Linsey, J. S., Clauss, E. F., Kurtoglu, T., Murphy, J. T., Wood, K. L. & Markman, A. B. 2011 An experimental study of group idea generation techniques: understanding the roles of idea representation and viewing methods. Journal of Mechanical Design 133 (3), 1–15.
Linsey, J. S., Tseng, I., Fu, K., Cagan, J., Wood, K. L. & Schunn, C. 2010 A study of design fixation, its mitigation and perception in engineering design faculty. Journal of Mechanical Design 132, 1–12.
Liu, A. & Lu, S. C. Y. 2013 Impacts of synthesis reasoning on ideation effectiveness in conceptual design. Journal of Mechanical Design, Transactions of the ASME 135 (4), 061009; doi:10.1115/1.4024086.
Liu, A. & Lu, S. Y. 2014 “Reinforcing a ‘design thinking’ course by restructuring student-instructor interactions.” In ASEE Annual Conference and Exposition, Conference Proceedings; doi:10.18260/1-2--22968. ASEE.
Liu, A. & Lu, S. C. Y. 2016 A crowdsourcing design framework for concept generation. CIRP Annals - Manufacturing Technology, CIRP 65 (1), 177–180.
Luo, S., Bian, Z. & Hu, Y. 2022 How can biological shapes inspire design activity in closed domains? International Journal of Technology and Design Education 32 (1), 479–505; doi:10.1007/s10798-020-09593-y.
Miller, S. R., Hunter, S. T., Ahmed, F., Starkey, E. & Fuge, M. 2020 “How should we measure creativity in design studies? A comparison of social science and engineering approaches.” In Proceedings of ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference IDETC/CIE2019, Missouri, pp. 25–30. American Society of Mechanical Engineers.
Milojevic, H., Girardello, A., Zhang, Z. & Jin, Y. 2016 “Influence of thinking style on design creativity.” In 4th International Conference on Design Creativity, ICDC 2016, pp. 1–8. The Design Society.
Moore, D., Sauder, J. & Jin, Y. 2014 A dual-process analysis of design idea generation. In Proceedings of the ASME Design Engineering Technical Conference, Vol. 7, pp. 1–11. American Society of Mechanical Engineers.
Moreno, D. P., Blessing, L. T., Yang, M. C., Hernández, A. A. & Wood, K. L. 2016 Overcoming design fixation: design by analogy studies and nonintuitive findings. Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM 30 (2), 185–199.
Moreno, D. P., Hernández, A. A., Yang, M. C., Otto, K. N., Hölttä-Otto, K., Linsey, J. S., Wood, K. L. & Linden, A. 2014 Fundamental studies in design-by-analogy: a focus on domain-knowledge experts and applications to transactional design problems. Design Studies 35 (3), 232–272.
Nelson, B. A., Wilson, J. O., Rosen, D. & Yen, J. 2009 Refined metrics for measuring ideation effectiveness. Design Studies 30 (6), 737–743.
Nelson, B., Wilson, J. & Yen, J. 2009 A study of biologically-inspired design as a context for enhancing student innovation. In Proceedings - Frontiers in Education Conference, FIE, pp. 1–5. IEEE.
Okoli, C. & Schabram, K. 2010 A guide to conducting a systematic literature review of information systems research. Working Papers on Information Systems 10 (26), 1–51.
Okudan, G. E., Chiu, M. C., Lin, C. Y., Schmidt, L. C., Hernandez, N. V. & Linsey, J. 2010 “A pilot exploration of systematic ideation methods and tools on design learning.” In 2010 9th International Conference on Information Technology Based Higher Education and Training, ITHET 2010, pp. 102–107. IEEE.
Okudan, G. E., Ogot, M. & Shirwaiker, R. A. 2006 “An investigation on the effectiveness of design ideation using TRIZ.” In Proceedings of IDETC 2006 ASME 2006 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, pp. 1–9. ASME.
Okudan, G. E., Schmidt, L. C., Hernandez, N. V., Jablokow, K. & Lin, C. Y. 2012 “Questioning the impact of personality on design outcomes: should we take it into account?” In Proceedings of the ASME Design Engineering Technical Conference, Vol. 7, pp. 95–102. ASME.
Oman, S. K. & Tumer, I. Y. 2009 “The potential of creativity metrics for mechanical engineering concept design.” In International Conference on Engineering Design, ICED 2009, Stanford, CA, USA, pp. 145–156. The Design Society.
Oman, S. K. & Tumer, I. Y. 2010 “Assessing creativity and innovation at the concept generation stage in engineering design: a classroom experiment.” In Proceedings of the ASME 2010 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2010, pp. 1–9. American Society of Mechanical Engineers.
Oman, S., Tumer, I. Y. & Stone, R. 2014 “Reducing the subjectivity in creativity assessment.” In Proceedings of the ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 7: 2nd Biennial International Conference on Dynamics for Design; 26th International Conference on Design Theory and Methodology. Buffalo, New York, USA, pp. 1–11. American Society of Mechanical Engineers.
Pace, P. W., Wood, K. L., Wood, J. J. & Jensen, D. D. 2011 “Studying ideation in engineering design education: application to highly mobile robots.” In ASEE Annual Conference and Exposition, Conference Proceedings. American Society for Engineering Education.
Pahl, G., Beitz, W., Feldhusen, J. & Grote, K. H. 2007 Engineering Design (3rd Ed.). Springer-Verlag; doi:10.1007/978-1-84628-319-2.
Perez, B., Hilburn, S., Jensen, D. & Wood, K. L. 2019 Design principle-based stimuli for improving creativity during ideation. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 233 (2), 493–503.
Perttula, M. & Sipilä, P. 2007 The idea exposure paradigm in design idea generation. Journal of Engineering Design 18 (1), 93–102.
Pucha, R., Levy, B., Linsey, J. S., Newton, S. H., Alemdar, M. & Utschig, T. 2017 “Assessing concept generation intervention strategies for creativity using design problems in a freshman engineering graphics course.” In ASEE Annual Conference and Exposition, Conference Proceedings, Vol. 2017-June. American Society for Engineering Education; doi:10.18260/1-2--27619.
Ranjan, B. S. C. & Chakrabarti, A. 2015 “Assessment of novelty and quantity across design phases.” In ICDC 2015 - Proceedings of the 3rd International Conference on Design Creativity, pp. 303–310. The Design Society.
Raviselvam, S., Hölttä-Otto, K. & Wood, K. L. 2016 “User extreme conditions to enhance designer empathy and creativity: applications using visual impairment.” In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Charlotte, North Carolina; doi:10.1115/detc2016-59602.
Raviselvam, S., Sanei, R., Blessing, L., Hölttä-Otto, K. & Wood, K. L. 2017 “Demographic factors and their influence on designer creativity and empathy evoked through user extreme conditions.” In Proceedings of the ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 1–10. American Society of Mechanical Engineers.
Restrepo, J., Ríos-Zapata, D., Mejía-Gutiérrez, R., Nadeau, J. P. & Pailhès, J. 2018 Experiences in implementing design heuristics for innovation in product design. International Journal on Interactive Design and Manufacturing 12 (3), 777–786.
Rizzuti, S. & De Napoli, L. 2020 “Product design education in mechanical and management engineering master’s degree programmes: analogies and differences.” In Proceedings of the 22nd International Conference on Engineering and Product Design Education, E and PDE 2020; doi:10.35199/epde.2020.66. The Design Society.
Sarkar, P. & Chakrabarti, A. 2007 “Development of a method for assessing design creativity.” In International Conference on Engineering Design, ICED 2007, pp. 1–12. The Design Society.
Sarkar, P. & Chakrabarti, A. 2011 Assessing design creativity. Design Studies 32 (4), 348–383.
Sauder, J., Lian, E. & Yan, J. 2013 “The effect of collaborative stimulation on design novelty.” In Proceedings of the ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference IDETC/CIE 2013, Portland, Oregon, USA. American Society of Mechanical Engineers.
Schmidt, L. C. & Hernandez, N. V. 2010 “Pilot of systematic ideation study with lesson learned.” In Proceedings of the ASME 2010 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2010, pp. 1–9. ASME.
Schmidt, L. C., Hernandez, N. V. & Ruocco, A. L. 2012 Research on encouraging sketching in engineering design. Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM 26 (3), 303–315.
Shah, J. J., Kulkarni, S. V. & Vargas-Hernandez, N. 2000 Evaluation of idea generation methods for conceptual design: effectiveness metrics and design of experiments. Journal of Mechanical Design 122, 377–384.
Shah, J. J., Smith, S. M. & Vargas-Hernandez, N. 2006 “Multilevel aligned empirical studies of ideation: final results.” In Proceedings of the ASME 2006 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Philadelphia, Pennsylvania. American Society of Mechanical Engineers.
Shah, J. J., Vargas-Hernandez, N. & Smith, S. M. 2003a Metrics for measuring ideation effectiveness. Design Studies 24 (2), 111–134.
Shah, J. J., Vargas-Hernandez, N. & Smith, S. M. 2003b “Empirical studies of design ideation: alignment of design experiments with lab experiments.” In Proceedings of DETC 2003: ASME 2003 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Chicago, Illinois. American Society of Mechanical Engineers; doi:10.1115/DETC2003/DTM-48679.
Shireen, N., Erhan, H. I., Sánchez, R., Popovic, J., Woodbury, R. & Riecke, B. E. 2011 “Design space exploration in parametric systems: analyzing effects of goal specificity and method specificity on design solutions.” In C and C 2011 - Proceedings of the 8th ACM Conference on Creativity and Cognition, pp. 249–258. Association for Computing Machinery, Inc.
Sinha, S., Chen, H., Meisel, N. A. & Miller, S. R. 2017 Does designing for additive manufacturing help us be more creative? In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 3, pp. 1–12. American Society of Mechanical Engineers.
Sluis-Thiescheffer, W., Bekker, T., Eggen, B., Vermeeren, A. & De Ridder, H. 2016 Measuring and comparing novelty for design solutions generated by young children through different design methods. Design Studies 43, 48–73.
Sonalkar, N., Mabogunje, A., Pai, G., Krishnan, A. & Roth, B. 2016 Diagnostics for design thinking teams. In Design Thinking Research: Making Design Thinking Foundational (ed. Plattner, H., Meinel, C. & Leifer, L.), pp. 35–51. Springer.
Srivathsavai, R., Genco, N., Hölttä-Otto, K. & Seepersad, C. C. 2010 “Study of existing metrics used in measurement of ideation effectiveness.” In Proceedings of the ASME 2010 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2010, Montreal - Canada. ASME.
Starkey, E. M., Gosnell, C. A. & Miller, S. R. 2015 “Implementing creativity evaluation tools into the concept selection process in engineering education.” In Proceedings of the ASME 2015 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2015, August 2–5, 2015, Boston, Massachusetts, USA, pp. 1–10. American Society of Mechanical Engineers.
Starkey, E., Toh, C. A. & Miller, S. R. 2016 Abandoning creativity: the evolution of creative ideas in engineering design course projects. Design Studies 47, 47–72.
Sun, L., Xiang, W., Chai, C., Wang, C. & Huang, Q. 2014 Creative segment: a descriptive theory applied to computer-aided sketching. Design Studies 35 (1), 54–79.
Sun, G. & Yao, S. 2012 Investigating the relation between cognitive load and creativity in the conceptual design process. Proceedings of the Human Factors and Ergonomics Society 56, 308–312.
Sun, G., Yao, S. & Carretero, J. A. 2014 Comparing cognitive efficiency of experienced and inexperienced designers in conceptual design processes. Journal of Cognitive Engineering and Decision Making 8 (4), 330–351.
Sun, G., Yao, S. & Carretero, J. A. 2015 A pilot study for investigating factors that affect cognitive load in the conceptual design process. Proceedings of the Human Factors and Ergonomics Society 59, 180–184.
Tan, S. Y., Hölttä-Otto, K. & Anariba, F. 2019 Development and implementation of design-based learning opportunities for students to apply electrochemical principles in a designette. Journal of Chemical Education 96 (2), 256–266.
Toh, C. A. & Miller, S. R. 2013a Visual inspection or product dissection? The impact of designer-product interactions on engineering design creativity. In Proceedings of the ASME 2013 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, pp. 1–14. American Society of Mechanical Engineers.
Toh, C. A. & Miller, S. R. 2013b Exploring the utility of product dissection for early-phase idea generation. In Proceedings of the ASME Design Engineering Technical Conference, Vol. 5, pp. 1–12. American Society of Mechanical Engineers.
Toh, C. A. & Miller, S. R. 2014 The impact of example modality and physical interactions on design creativity. Journal of Mechanical Design, Transactions of the ASME 136 (9), 1–8.
Toh, C. A. & Miller, S. R. 2015 How engineering teams select design concepts: a view through the lens of creativity. Design Studies 38, 111–138.
Toh, C. A. & Miller, S. R. 2016a Choosing creativity: the role of individual risk and ambiguity aversion on creative concept selection in engineering design. Research in Engineering Design 27 (3), 195–219.
Toh, C. A. & Miller, S. R. 2016b Creativity in design teams: the influence of personality traits and risk attitudes on creative concept selection. Research in Engineering Design 27 (1), 73–89.
Toh, C. A., Miller, S. R. & Kremer, G. E. O. 2012 The impact of product dissection activities on the novelty of design outcomes. In Proceedings of the ASME 2012, pp. 1–10. American Society of Mechanical Engineers.
Toh, C. A., Miller, S. R. & Kremer, G. E. O. 2014 The impact of team-based product dissection on design novelty. Journal of Mechanical Design, Transactions of the ASME 136 (4); doi:10.1115/1.4026151.
Tsenn, J., McAdams, D. A. & Linsey, J. S. 2013 “A comparison of design self-efficacy of mechanical engineering freshmen, sophomores, and seniors.” In ASEE Annual Conference and Exposition, Conference Proceedings. American Society for Engineering Education; doi:10.18260/1-2--19044.
Vandevenne, D., Pieters, T. & Duflou, J. R. 2016 Enhancing novelty with knowledge-based support for biologically-inspired design. Design Studies 46, 152–173.
Vargas-Hernandez, N., Okudan, G. E. & Schmidt, L. C. 2012 “Effectiveness metrics for ideation: merging genealogy trees and improving novelty metric.” In Proceedings of the ASME 2012 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2012, Chicago, Illinois. American Society of Mechanical Engineers; doi:10.1115/DETC2012-70295.
Vargas-Hernandez, N., Schmidt, L. C. & Okudan, G. E. 2012 “Experimental assessment of TRIZ effectiveness in idea generation.” In ASEE Annual Conference and Exposition, Conference Proceedings, San Antonio, Texas. American Society for Engineering Education.
Vargas-Hernandez, N., Schmidt, L. C. & Okudan, G. E. 2012 “Systematic ideation effectiveness study of TRIZ.” In Proceedings of the ASME 2012 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2012, pp. 1–10. American Society of Mechanical Engineers.
Vargas-Hernandez, N., Schmidt, L. C. & Okudan, G. E. 2013 Systematic ideation effectiveness study of TRIZ. Journal of Mechanical Design 135 (10), 1–10.
Vargas-Hernandez, N., Shah, J. J. & Smith, S. M. 2010 Understanding design ideation mechanisms through multilevel aligned empirical studies. Design Studies 31 (4), 382–410.
Verhaegen, P. A., Peeters, J., Vandevenne, D., Dewulf, S. & Duflou, J. R. 2011 Effectiveness of the PAnDA ideation tool. Procedia Engineering 9, 63–76.
Verhoeven, D., Bakker, J. & Veugelers, R. 2016 Measuring technological novelty with patent-based indicators. Research Policy 45 (3), 707–723.
Viswanathan, V. K. & Linsey, J. S. 2010 Physical models in idea generation – hindrance or help? In Proceedings of the 22nd International Conference on Design Theory and Methodology DTM22, pp. 1–11. American Society of Mechanical Engineers.
Viswanathan, V. K. & Linsey, J. S. 2013 Role of sunk cost in engineering idea generation: an experimental investigation. Journal of Mechanical Design 135 (12), 121002.
Weaver, M. B., Bennetts, C. & Caldwell, B. W. 2018 Rate of variety and its relation to creativity factors. In Proceedings of the ASME Design Engineering Technical Conference, Vol. 7, pp. 1–8. ASME.
Weaver, M. B., Caldwell, B. W. & Sheafer, V. 2019 “Interpreting measures of rarity and novelty: investigating correlations between relative infrequency and perceived ratings.” In Proceedings of the ASME Design Engineering Technical Conference, Vol. 7, Anaheim, CA. American Society of Mechanical Engineers; doi:10.1115/DETC2019-97828.
Weir, J., Lewis, B., Burvill, C. & Field, B. 2005 “A quantitative study of ideation, visualisation and graphical skills in engineering design.” In International Conference on Engineering Design ICED 05, Melbourne. Design Society.
Wilson, J. O., Rosen, D., Nelson, B. A. & Yen, J. 2010 The effects of biological examples in idea generation. Design Studies 31 (2), 169–186.
Zheng, X., Ritter, S. C. & Miller, S. R. 2018 How concept selection tools impact the development of creative ideas in engineering design education. Journal of Mechanical Design, Transactions of the ASME 140 (5), 1–11.
Table 1. List of the investigation targets related to the questions formulated in Subsection 4

Figure 1. Procedure used to perform the literature review described in this paper.

Figure 2. Citation trend for the work of Shah, Vargas-Hernandez & Smith (2003a, 2003b). The figure also shows how many of those citations come from papers that actually use the original SVS approach and how many use a modified version of it.

Table 2. List of documents identified according to Substeps 3.1 and 3.2 in Figure 1

Table 3. List of metrics used in the works that refer to SVS but use different approaches

Figure 3. Percentages of the papers using the two different novelty assessment approaches originally proposed by SVS.

Figure 4. Terms used by the different authors to identify the novelty concept underpinning the assessment performed through the SVS metrics.

Figure 5. Types of application identified for the SVS novelty metrics.

Table 4. Application types identified for the SVS novelty metrics

Figure 6. Papers that use SVS novelty metrics assessment and apply interrater agreement tests.

Figure 7. Percentages of articles that describe the rationale for using the original SVS novelty assessment approaches. In particular, the graph shows how many documents do not report, partially report, or are not required to report information about the assessment rationale. None of the reviewed papers completely reported the information required to ensure the repeatability of the experiment.

Table 5. Findings from each of the seven research questions introduced in Section 2

Table 6. Implications of the results presented in Table 5

Table A1. Articles that could not be retrieved