1. Introduction
Prototyping is an essential task throughout the design process (Wall, Ulrich, & Flowers Reference Wall, Ulrich and Flowers1992) that facilitates both learning and decision-making (Jensen, Özkil, & Mortensen Reference Jensen, Özkil and Mortensen2016; Lauff et al. Reference Lauff, Kotys-Schwartz and Rentschler2018) and supports designers in elucidating a product’s functional requirements (Ege et al. Reference Ege, Lilleløkken, Auflem and Steinert2020; Kriesi et al. Reference Kriesi, Blindheim, Bjelland and Steinert2016). It allows designers to evaluate specific aspects of eventual designs (Houde & Hill Reference Houde, Hill, Helander, Landauer and Prabhu1997; Buchenau & Suri Reference Buchenau and Suri2000) and to mitigate the risk of encountering severe obstacles later in the development process (Takeuchi & Nonaka Reference Takeuchi and Nonaka1986; Thomke & Reinertsen Reference Thomke and Reinertsen1998; Steinert & Leifer Reference Steinert and Leifer2012).
Despite its importance, prototyping is often portrayed as a one-step process. This is illustrated by misleading phrases like “bang out a quick prototype,” a description that fails to address the actual complexity of prototyping (Christie et al. Reference Christie, Jensen, Buckley, Menefee, Ziegler, Wood and Crawford2012).
Studies have sought to address prototyping’s complexity through the introduction of prototyping strategies (Camburn et al. Reference Camburn, Viswanathan, Linsey, Anderson, Jensen, Crawford, Otto and Wood2017, Reference Camburn, Dunlap, Gurjar, Hamon, Green, Jensen, Crawford, Otto and Wood2015), recommendations for the use of prototypes (Lim, Stolterman, & Tenenberg Reference Lim, Stolterman and Tenenberg2008), including prototyping planners (Lauff, Menold, & Wood Reference Lauff, Menold and Wood2019; Hansen et al. Reference Hansen, Jensen, Özkil and Pacheco2020) and holistic prototyping frameworks (Menold, Jablokow, & Simpson Reference Menold, Jablokow and Simpson2017; Ahmed & Demirel Reference Ahmed and Demirel2020). However, many of these are ad hoc, based on pass/fail criteria or on a limited design time frame (Menold et al. Reference Menold, Jablokow and Simpson2017). Further, it has been shown that novice designers still do not maximise the benefits of prototyping and rarely engage with prototyping best practices (Petrakis, Wodehouse, & Hird Reference Petrakis, Wodehouse and Hird2021), highlighting the limitations of current prototyping strategies and frameworks.
Hackathons have been shown to be applicable and useful for organisational innovation (Kollwitz & Dinter Reference Kollwitz and Dinter2019) and to be good platforms for open innovation that can be used to accelerate development (Flores et al. Reference Flores, Golob, Maklin, Herrera, Tucci, Al-Ashaab, Williams, Encinas, Martinez, Zaki, Sosa and Pineda2018). These features are representative of sub-sections of the engineering design process, particularly those associated with the earliest stages of design, which are often characterised by a high degree of uncertainty, rapid learning and the need for fast iteration (Steinert & Leifer Reference Steinert and Leifer2012). As a result, the study of hackathons can provide a unique lens through which these types of design scenarios - which we will refer to as rapid innovation scenarios - can be better understood.
To complement extant prototyping knowledge and improve understanding of design practice, this article characterises the prototyping strategies and practices of design teams during a 4-day hackathon by means of an analysis of a prototyping dataset captured during the International Design Engineering Annual (IDEA) Challenge 2021. This analysis aligns prototyping domains (i.e. physical, digital or sketch) and prototypes’ purposes in order to provide recommendations as to which types of prototypes are best suited for given purposes.
The contribution of this article lies in (i) determining the purposes for which physical, digital and sketch prototypes are most appropriate; (ii) understanding prototyping strategies and practices at hackathons; and, based on (i) and (ii), (iii) providing recommendations on prototyping best practices at hackathons and in similar scenarios.
The remainder of the article is structured as follows. Section 2 presents background information on prototyping, hackathons in design research and the measurement of design activity. Section 3 describes the methodology followed. Results are presented in Section 4 and post hoc interviews in Section 5, both of which are discussed in Section 6, before the article concludes in Section 7.
2. Background
To contextualise the article, this section will consider prototyping, hackathons for design research and means of prototyping capture.
2.1. Prototyping
When faced with a design problem, designers must decide what type of prototype is most fitting to solve a given problem. This requires the selection of an appropriate prototyping medium, which involves consideration of what a prototype is, i.e., its domain, and, more teleologically, of its purpose, i.e., what it is for. To effectively measure and compare prototypes against each other and, importantly, to capture the essence of prototyping in a design context, a taxonomy is required that encompasses the domains and purposes of prototypes. Multiple prototype taxonomies have been suggested: some focus on the what of prototypes (Hicks et al. Reference Hicks, Culley, Allen and Mullineux2002; Hannah, Michaelraj, & Summers Reference Hannah, Michaelraj and Summers2008; Lim et al. Reference Lim, Stolterman and Tenenberg2008) and others on the why (Lauff, Kotys-Schwartz, & Rentschler Reference Lauff, Kotys-Schwartz and Rentschler2018; Petrakis, Hird, & Wodehouse Reference Petrakis, Hird and Wodehouse2019). Taxonomies, as suggested by Nickerson, Varshney, & Muntermann (Reference Nickerson, Varshney and Muntermann2013), are inherently use-case-specific and often non-transferable from one use case to another, making the selection of a relevant and context-specific taxonomy crucial for the accurate evaluation and understanding of prototypes within this article.
Prototyping domains and their affordances in early phase design
In defining what a prototype is, this section will consider three principal domains of prototypes: digital, physical and sketches. At the highest level, prototypes can be broken down into two categories: digital – comprising 1s and 0s – and physical – comprising atoms.
Physical prototypes, described by Plattner (Reference Plattner2010) as “anything that takes a physical form,” are purposefully created artefacts that approximate one (or several) features of a product, service or system (Otto & Wood Reference Otto and Wood2001). Fast and iterative physical prototyping has been shown to improve design outcomes (Dow, Heddleston, & Klemmer Reference Dow, Heddleston and Klemmer2009; Neeley et al. Reference Neeley, Lim, Zhu and Yang2013), accentuating the importance of prototyping in design. Schrage (Reference Schrage2004) described the product development process in some organisations as “prototype driven”, where physical prototypes are created and tested for learning. This can lead to important insights being revealed early in a design process and at significantly lower cost than intensive digital simulations or external information gathering, in terms of both time and the availability of expert services (Kriesi et al. Reference Kriesi, Blindheim, Bjelland and Steinert2016). Instead of relying on predetermined specifications, Gerstenberg et al. (Reference Gerstenberg, Sjöman, Reime, Abrahamsson, Steinert, Chorianopoulos, Divitini, Hauge, Jaccheri and Malaka2015) suggest that (physical) prototypes should guide development, where new learning and insights decide subsequent development steps. This addresses the inherent ambiguity and “fuzziness” of early phase design (Leifer & Steinert Reference Leifer and Steinert2011) with the added benefit of designers continuously and consciously reflecting on the outcome (Schön Reference Schön1983).
For simple, specific design tasks, digital prototypes (prototypes created and tested computationally) have been shown to reduce completion time and increase performance (Hamon et al. Reference Hamon, Green, Dunlap, Camburn, Crawford and Jensen2014). Digital prototyping also allows for the rapid testing of complex designs, such as through strength and fluid simulations. However, it is also argued that most digital prototyping tools are only applicable in the more detailed design phases, not the preliminary conceptual phases (Hsu & Liu Reference Hsu and Liu2000). This is because early-phase product requirements are imprecise and incomplete, and because digital prototypes are limited to the phenomena modelled or directly programmed into them, which may themselves be imprecise and incomplete (Otto & Wood Reference Otto and Wood2001). Kent et al. (Reference Kent, Snider, Gopsill and Hicks2021) also argue that the learning curve of digital prototyping tools is often steep, creating complexity and increasing realisation time. This may also increase the sunk cost effect, decreasing the chance of bad designs being thrown away.
A third category of prototypes is that of sketches. These can exist in both physical and digital domains, which the literature considers analogous (Ranscombe & Bissett-Johnson Reference Ranscombe and Bissett-Johnson2017). Sketching has been shown to have an important role in the early phases of design (Ullman, Wood, & Craig Reference Ullman, Wood and Craig1990), being used to capture (Yang & Cham Reference Yang and Cham2006) and express ideas (Tang & Leifer Reference Tang and Leifer1988) and acting as important boundary objects (Henderson Reference Henderson1991). Yang (Reference Yang2009) found no correlation between the number of sketches and design outcome, indicating that an abundance of sketches does not necessarily yield better designs. Because digital and physical sketching are similar to one another, yet clearly differ in their affordances from other prototypes, sketches are considered a separate type of prototype.
This study considers physical prototypes as any physical artefact created during the IDEA Challenge, except as a result of drawing processes. This includes prototypes made using manual machining, hand modelling, 3D printing, cardboard/foam modelling etc. Digital prototypes are any prototype created (and tested) on a computer, such as simulations, CAD models, renders and code. Sketches are the result of any drawing process, such as sketches, schematics, hand-drawn diagrams and hand calculations.
Purposes of prototypes
What a prototype is for can be considered synonymous with its purpose. In Aristotelian thought, the notion of purpose, or telos, does not necessarily require deliberation, intention, consciousness or intelligence (Barnes Reference Barnes1984). Leaving aside potential clichés about modern-day students, this is an important notion for prototyping and even design in general. Whilst activities in these domains typically are, and benefit from being, directed and intentional, they can also uncover things about the design and solution spaces by accident, without planning or direction. For this reason, when considering the purpose of prototypes, it is worth considering both what they were intended for and what they might have facilitated accidentally.
A number of ways of defining the purposes of prototypes are available in the literature. Prototypes can be divergent or convergent, for example, in design space exploration or evaluation (Jensen et al. Reference Jensen, Özkil and Mortensen2016). Prototypes can also be discussed as being filters to explore a design idea or manifestations that externalise a design idea for communication and evaluation (Lim et al. Reference Lim, Stolterman and Tenenberg2008). Ulrich & Eppinger (Reference Ulrich and Eppinger2012) define four purposes for prototyping: learning, communication, integration and milestones.
Camburn’s definitions of prototyping purposes are (Camburn et al. Reference Camburn, Viswanathan, Linsey, Anderson, Jensen, Crawford, Otto and Wood2017):
• Refinement - the process of gradually improving a design;
• Communication - the process of sharing information about the design and its potential use within the design team and to users;
• Active learning - the process of gaining new knowledge about the design space or relevant phenomena. In this context, active learning applies not only in the educational sense but in terms of advancing designers’ mental or analytical models of phenomenal interactions; and,
• Exploration - the process of seeking out new design concepts.
As well as choosing a prototyping purpose, the degree of fidelity, size and functionality need to be addressed when answering design questions. As prototyping is fundamentally a knowledge-generation activity, it is also crucial to consider the type (Real et al. Reference Real, Snider, Goudswaard and Hicks2021), fidelity and accessibility of the knowledge that is to be generated (Goudswaard et al. Reference Goudswaard, Snider, Gopsill, Jones, Harvey and Hicks2021). All of these factors have a substantial impact on cost, efficiency and time use during a design project. It is therefore important to choose an appropriate way of prototyping, i.e. an appropriate prototyping strategy.
2.2. Hackathons for design research
The term hackathon is a portmanteau of the words hack and marathon (Briscoe & Mulligan Reference Briscoe and Mulligan2014). Hackathons involve opening presentations, followed by targeted and accelerated design episodes in which multiple teams, over periods of between a day and a week, design solutions towards similar or identical briefs. They often feature closing pitches and are judged by a panel to determine a winner (Briscoe & Mulligan Reference Briscoe and Mulligan2014).
Hackathons enable a focused interruption-free workspace, facilitating co-located knowledge exchange and rapid feedback on technical issues, while cultivating team identity and learning opportunities (Pe-Than & Herbsleb Reference Pe-Than and Herbsleb2019). They provide value through the provision of opportunities for people to meet and collaborate to create new links (Briscoe & Mulligan Reference Briscoe and Mulligan2014).
From a design research perspective, although not carried out in a laboratory, hackathons can be considered to share characteristics of studies carried out in laboratory settings (Goudswaard et al. Reference Goudswaard, Kent, Giunta, Gopsill, Snider, Valjak, Christensen, Felton, Ege, Real, Cox, Horvat, Kohtala, Eikevåg, Martinec, Perišić, Steinert and Hicks2022) as defined by Blessing and Chakrabarti (Reference Blessing and Chakrabarti2009). This is due to:
• The process of observing: features a pre-determined time, location and duration; is repeatable with different participants; and features minimal interruptions.
• The observed process: is self-contained such that data are available and span the entire design process; is of low complexity (e.g. tens of parts); allows the individual to be analysed; features a fixed assignment (unless change is introduced by the researcher); permits the determination of causation and correlation; and creates results that may not entirely relate to reality.
While some characteristics are shared with laboratory-based design experiments, hackathons cannot be described as easy to control. While constrained in time and space, hackathon design outputs, strategies and practices employed by participants will vary greatly.
Others have utilised design hackathons to investigate collaborative design tools in relation to innovation categories (Lobbe, Bazzaro, & Sagot Reference Lobbe, Bazzaro and Sagot2021). Hackathons have also been used to facilitate the design process (Artiles & Wallace Reference Artiles and Wallace2013), to teach design (Fowler Reference Fowler2016) and to study how education level affects hackathon outcomes (Legardeur et al. Reference Legardeur, Masson, Gardoni and Pimapunsri2020). The hackathon presents itself as a unique and realistic setting for studying design activities (Flus & Hurst Reference Flus and Hurst2021a) and has been proposed as a platform for running design studies (Ege et al. Reference Ege, Goudswaard, Nesheim, Eikevåg, Bjelland, Christensen, Ballantyne, Su, Cox, Timperley, Aeddula, Machchhar, Ruvald, Li, Figueiredo, Deo, Horvat, Čeh, Šklebar, Miler, Gopsill, Hicks and Steinert2023). However, few studies have gone beyond hackathon outcomes to examine the design activities taking place during these events (Olesen, Hansen, & Halskov Reference Olesen, Hansen and Halskov2018).
Characteristics of design practice in hackathons include designers having less time to think as time-expensive fabrication is required, greater parallelisation of activities and the application of agile design methods, generally featuring iteration (Flus & Hurst Reference Flus and Hurst2021b); hackathons feature interdisciplinary teams, involve decision-making under pressure, enable the overcoming of organisational challenges (Frey & Luks Reference Frey and Luks2016) and require the creation of perceptible prototypes (Falk & Young Reference Falk and Young2022). They are useful for organisational innovation (Kollwitz & Dinter Reference Kollwitz and Dinter2019) and can bring together diverse groups to tackle social issues (Hope et al. Reference Hope, Michelson, D’Ignazio, Roberts, Zuckerman, Hoy and Krontiris2019). “Hackathon participation involves high demands for fast idea generation, decision-making, and prototyping, which ideally ends in a functioning and novel prototype” (Flores et al. Reference Flores, Golob, Maklin, Herrera, Tucci, Al-Ashaab, Williams, Encinas, Martinez, Zaki, Sosa and Pineda2018).
These characteristics are not dissimilar to a range of characteristics of real design processes – particularly those associated with rapid innovation in the earliest stages of design, often characterised by a high degree of uncertainty (Steinert & Leifer Reference Steinert and Leifer2012) and the need for fast iterations (Kriesi et al. Reference Kriesi, Blindheim, Bjelland and Steinert2016). Moreover, simultaneous and parallel prototyping in multiple disciplines, abductive learning from rapid learning cycles and the involvement of diverse groups have been shown to be beneficial in early-stage design (Gerstenberg et al. Reference Gerstenberg, Sjöman, Reime, Abrahamsson, Steinert, Chorianopoulos, Divitini, Hauge, Jaccheri and Malaka2015). Because of these shared characteristics, hackathons afford unique opportunities for studying design activity at the early stages of design, in particular due to their time pressures, reduced incubation times and the varying levels of design expertise of participants (Flus & Hurst Reference Flus and Hurst2021a).
In addition to the qualities hackathons share with the early phases of design, similarities can be drawn with design processes such as Tiger Teams and the normal tendering process. The term Tiger Teams was coined in a 1964 paper to describe “a team of undomesticated and uninhibited technical specialists, selected for their experience, energy, and imagination, and assigned to track down relentlessly every possible source of failure in a spacecraft subsystem or simulation” (Dempsey et al. Reference Dempsey, Davis, Crossfield and Williams1964). Such teams are now used more broadly to tackle a wide range of technical issues across multiple disciplines (Bratburd Reference Bratburd2021). Similarities with hackathons can be seen in the formation of multi-disciplinary teams to rapidly solve complex problems. The normal tendering process also has parallels with hackathons, in that multiple teams prepare bids in short timescales in order to compete to win a contract. The study of the design process in hackathons can therefore provide insights into these real design contexts.
2.3. Measuring design activity through prototype capture
A variety of techniques can be used for measuring design activity. These include protocol studies, in which designers’ subjective verbal reports of a design activity are analysed to understand their cognitive processes (Ericsson & Simon Reference Ericsson and Simon1982), and logbook studies (McAlpine et al. Reference McAlpine, Hicks, Huet and Culley2006; McAlpine, Cash, & Hicks Reference McAlpine, Cash and Hicks2017). Both are, however, unsuitable for design activity such as that observed at hackathons - the former due to the large amount of data (Goldshmidt & Weil Reference Goldshmidt and Weil1998) and the large number of participants in difficult-to-control settings (Flus & Hurst Reference Flus and Hurst2021a), and the latter due to the lack of documentation in a rapidly evolving design process.
As an alternative, hackathons are expected to produce a range of both tangible and intangible outputs, including new prototypes (Angarita & Nolte Reference Angarita, Nolte, Nolte, Alvarez, Hishiyama, Chounta, Rodríguez-Triana and Inoue2020). A method of understanding design activity is therefore to capture and measure prototypes, which are considered important objects in the product development process, spanning both physical and digital domains (Wall et al. Reference Wall, Ulrich and Flowers1992). This capture constitutes a representation of the artefact as well as its design rationale, consisting of “not only the reasons behind decision but also the justification for it, the alternatives considered, the trade-offs evaluated and the argumentation that led to the decision” (Lee Reference Lee1997).
In design research, several prototype capture methods have been proposed for understanding and improving prototyping practice. Physical capture rigs include Archie (Nelson, Berlin, & Menold Reference Nelson, Berlin and Menold2019; Nelson & Menold Reference Nelson and Menold2020), Protobooth (Erichsen et al. Reference Erichsen, Sjöman, Steinert and Welo2020) and Pro2booth (Giunta et al. Reference Giunta, Kent, Goudswaard, Gopsill and Hicks2022), all of which capture pictures of physical prototypes along with some elements of rationale for their creation.
2.4. Research gap
From the sections above, the research gap to be addressed in this article can be deduced.
Prototyping is an important activity in product development that features methods in physical, digital and sketch domains. While there exist a range of strategies and recommendations for best prototyping practice, there remains a lack of clarity over which domain to use when and for which purposes each is most appropriate - no paper in the English language was found to define this. Correspondingly, the first aim of this article is to examine the purpose(s) and affordances, i.e. the quality or property of an object that defines its possible uses or makes clear how it can or should be used (Maier & Fadel Reference Maier and Fadel2009), of physical, digital and sketch prototypes.
Hackathons are a setting for design research that can provide insights into design practices at the early stages of the design process. Given this potential, the second aim of this article is to establish and characterise the prototyping strategies and practices of teams in a virtually hosted hackathon, with the third aim being to consider recommendations for prototyping strategies in rapid innovation scenarios.
3. Methodology
This section describes the dataset used including its generation and capture and the post hoc analysis and interview protocols undertaken. An overview of the process is depicted in Figure 1.
3.1. The IDEA Challenge dataset - data generation and capture
To address the research question, analysis was carried out on an existing, openly available prototyping dataset previously captured by the authors during the International Design Engineering Annual (IDEA) Challenge 2021. Further details about the hackathon (Goudswaard et al. Reference Goudswaard, Kent, Giunta, Gopsill, Snider, Valjak, Christensen, Felton, Ege, Real, Cox, Horvat, Kohtala, Eikevåg, Martinec, Perišić, Steinert and Hicks2022) and the data capture methods (Giunta et al. Reference Giunta, Kent, Goudswaard, Gopsill and Hicks2022) can be found in the literature; only the key characteristics of the IDEA Challenge and the data capture method are highlighted in the following sections.
Data generation
The IDEA Challenge hackathon was devised primarily for data generation, offering a controlled yet realistic scenario for studying prototyping activities (Flus & Hurst Reference Flus and Hurst2021a). Data was generated over 4 consecutive days involving four independent teams distributed across Europe. All teams were given the same design brief on the first day. Each team comprised three or four members who worked independently and were co-located at their institutions.
Table 1 illustrates the demographics of each team, encapsulating the teams’ average age, gender distribution, current professional or academic positions, average years of design experience and specific fields of expertise, captured through an online form. A relevant degree with design content could be counted as experience, as well as academic and/or industry experience. Within each team participating in the hackathon, at least one member possessed not only academic experience but also practical experience from the industry. Participants were invited to take part in the hackathon on the basis of their roles as researchers within the engineering design community at various European institutions, ensuring expertise and active involvement in this field. Team A was from the Netherlands, Team B from Croatia, Team C from the United Kingdom and Team D from Norway.
Participants contributed from their respective home institutions, where they had ready access to familiar tools and technologies. Table 2 provides a self-reported summary of the tools and equipment available to each team for the duration of the hackathon.
The design brief given to the participants was to develop a low-cost vaccine storage unit that could facilitate vaccine distribution in rural areas of Colombia, using eggs as proxies for vaccine vials. Colombia was chosen for the range of challenges that arise because of the varied climate and transport types needed to reach remote areas. The task set forth for the participating teams effectively constituted an ‘egg drop’ challenge, but with the added critical dimension of temperature control, reflecting the real-world scenario of maintaining vaccine integrity during transportation under the diverse climatic conditions of rural Colombia.
Teams were required to create a physical prototype that could be tested and validated on the final day of the hackathon. In addition, the final submission comprised selected digital prototypes, including CAD models, renders or simulations. The performance of physical prototypes was determined via an impact test, in which the egg should not break, and temperature tests, in which the egg must be maintained at vaccine-appropriate temperatures for as long as possible. Precise testing mechanisms were determined by the teams themselves but had to specify a drop height at which no eggs broke and the temperature drop in the prototype over time, ensuring that while the methodology was flexible, the evaluation criteria were consistent and measurable. Final designs and pitches were judged by an industry expert, a senior lecturer and two members of the IDEA organising committee. Team D won the challenge, followed by Team A, Team C and Team B.
Data capture
Throughout the hackathon, participants were instructed to use Pro2booth - an online prototyping capture system. Participants were encouraged to capture all prototypes and were given a thorough introduction to the system and definitions used to ensure comparability. Prototype capture was incentivised by scoring the quantity and quality (whether teams filled in all boxes and added picture(s)) of prototypes captured each day to encourage participants to capture prototypes in as near real-time as possible. A full overview of the Pro2booth platform is provided elsewhere in the literature (Giunta et al. Reference Giunta, Kent, Goudswaard, Gopsill and Hicks2022), but details relating to the system pertinent to this article are as follows.
Figure 2 presents the user interface that participants interacted with throughout the hackathon. Certain fields were auto-populated, such as Dates and IDs. Users manually entered other details, including the Prototype name, alongside a description, rationale and insights generated from each prototype and its evaluation, all provided as free text entries. This interface also allowed participants to attribute authorship (i.e. who made it), link to prior prototypes that informed their current design, and upload relevant pictures or media. An iteration of a prototype was counted as a separate prototype.
3.2. Coding and characterisation
Pro2booth was designed to interfere as little as possible with teams’ design processes by limiting the necessary input fields. In order to provide an overview of, and compare, prototyping practices, the Pro2booth dataset required coding and characterisation. The characterisation comprises the domain and purpose (coded against Camburn’s four prototype purposes) of each prototype. Before coding the dataset, each prototype instance was checked for consistency between media and descriptions, purposes and insights. This process confirmed consistency across all entries and, as a result, none were removed.
The prototypes captured from Pro2booth were exported as a .JSON file from the web app. This data was then converted to a .CSV file to facilitate the creation of a prototyping overview for each team in MS Excel where post hoc data coding could be undertaken.
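The export-and-convert step can be sketched as follows. This is an illustrative sketch only: the `json_to_csv` helper and the field names are ours, and the actual Pro2booth export schema may differ.

```python
import csv
import io
import json

def json_to_csv(json_text, fields):
    """Flatten a list of prototype records (JSON array of objects) into CSV
    rows suitable for spreadsheet-based coding. Fields not listed in
    `fields` are dropped; missing fields raise an error from DictWriter."""
    records = json.loads(json_text)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(records)
    return out.getvalue()

# Hypothetical record mimicking one prototype entry from the export.
sample = '[{"id": 1, "name": "Foam shell", "description": "First insulation test", "domain": "physical"}]'
print(json_to_csv(sample, ["id", "name", "domain"]))
```

In practice the resulting .CSV was opened in MS Excel, with one row per prototype, so that the coding columns could be added alongside the captured fields.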
Prototypes were coded according to Qualitative Content Analysis as described by Delve (n.d.) according to the following data captured:
1. prototype name,
2. images or videos of a prototype,
3. description of a prototype,
4. rationale for a prototype’s creation, and
5. insights from making and evaluating the prototype.
The above coding schema constituted structural coding against prototype domains (physical, digital or sketch) and Camburn’s four prototype purposes: refinement – the process of gradually improving a design; communication – the process of sharing information about the design and its potential use within the design team and to users; active learning – the process of gaining new knowledge about the design space; and exploration – the process of seeking out new design concepts (Camburn et al. Reference Camburn, Viswanathan, Linsey, Anderson, Jensen, Crawford, Otto and Wood2017).
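The structural coding scheme can be expressed programmatically, which is useful for sanity-checking a coded spreadsheet. The sketch below is purely illustrative: the field and value names are ours, not those of the Pro2booth export, and it simply enforces the rule used in this study that each prototype receives one domain and one or two purposes.

```python
# Illustrative encoding of the coding schema; names are ours.
DOMAINS = {"physical", "digital", "sketch"}
PURPOSES = {"refinement", "communication", "active learning", "exploration"}

def validate_coded_entry(entry):
    """Check a coded prototype entry: exactly one known domain,
    and one or two purposes drawn from Camburn's four categories."""
    if entry["domain"] not in DOMAINS:
        raise ValueError(f"unknown domain: {entry['domain']}")
    purposes = set(entry["purposes"])
    if not (1 <= len(purposes) <= 2) or not purposes <= PURPOSES:
        raise ValueError(f"invalid purposes: {entry['purposes']}")
    return True

validate_coded_entry({"domain": "physical", "purposes": ["active learning"]})
```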
Figure 3 provides three sample prototypes captured during the IDEA challenge. Their characterisation with respect to Camburn’s prototyping purposes is shown in Table 3 to demonstrate the rationale behind the data codification.
Coding was carried out by a team of two design research experts working together synchronously. Thematic analysis of the codes was performed to draw out patterns and connections. Peer debriefing was used for validity and reliability during coding, in which all codes were discussed, conflicts identified and resolved, and consensus was reached on the appropriate code for a given data entry. Prototyping purposes are not deemed to be mutually exclusive, and while each prototype could be deemed to be for all of the above purposes, based on each prototype’s description, rationale and insights, one or two key purposes were determined. Coding the dataset retrospectively, and basing each code on more than the insight obtained from building and/or testing a prototype, ensures an alignment between the intent and result for each prototype. When the intent and result of a prototype differed, it was assigned two purposes.
Inter-rater reliability (IRR) analysis was performed post hoc, in which 15% of the dataset was re-coded. This was carried out in order to demonstrate consistency in coding by the design experts. Twenty-five prototypes (evenly distributed between teams) were chosen at random and individually coded by the same design experts who initially coded the dataset.
IRR was calculated using Cohen’s kappa statistic (McHugh Reference McHugh2012), indicating substantial agreement between coders ( $ \kappa =0.627 $ , $ p=0.040 $ ). For the purpose of calculating IRR, multi-purpose prototypes were treated as distinct categories, so the category set comprised the singular purposes and each observed combination thereof. The rater–rater matrix is available in the open dataset (Goudswaard et al. Reference Goudswaard, Gopsill, Giunta and Kent2022). The analysis showed full agreement on 22 codes, partial agreement on 7 codes (instances where one coder selected two possible codes and the other selected one of them) and outright disagreement on one code. Partial agreements and disagreements were discussed and resolved as follows: both coders revisited the prototype’s description, its intended rationale, its name and the derived insights, then shared the reasoning behind their initial coding choices with regard to the definition of purpose, before agreeing on the appropriate code. This process can be exemplified by discussing a code where the coders disagreed. The prototype in question, along with the initial and agreed codes, is shown in Table 4.
One coder initially identified the prototype as an “exploration prototype,” while the other viewed it as an “active learning prototype.” The latter justified their choice by pointing to the language used in the rationale (“trying to figure out”) and the insight (“figured out”), indicating that a learning process was central to the prototype’s use. Convinced by this argument, the former coder concurred, and a consensus was reached. Following a thorough review of all instances of partial agreement and disagreement, consensus was ultimately achieved on the coding of all prototypes.
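The kappa statistic used above can be reproduced directly from a rater–rater matrix or paired code lists. Below is a minimal sketch of Cohen’s kappa for two raters over nominal codes; the purpose codes and counts are hypothetical illustrations, not the study’s data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items with nominal codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items assigned the same code by both raters.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal code frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical purpose codes for 10 prototypes (AL = active learning,
# EX = exploration, RF = refinement, CM = communication):
a = ["AL", "AL", "EX", "RF", "CM", "AL", "EX", "RF", "AL", "EX"]
b = ["AL", "EX", "EX", "RF", "CM", "AL", "EX", "AL", "AL", "EX"]
print(round(cohens_kappa(a, b), 3))  # → 0.71
```

Multi-purpose prototypes would first be mapped to distinct combined categories (e.g., "RF+AL") before being passed to this function, consistent with the categorisation described above.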
3.3. Prototype dataset
The curated dataset captured during the IDEA Challenge and expanded through post-hackathon coding comprises 203 prototypes and more than 1300 relations between prototypes, projects and people. Key metrics of the dataset are summarised in Table 5, broken down by team, day of creation, domain, domain transition and purpose. The dataset is freely available in an open-access repository (Goudswaard et al. Reference Goudswaard, Gopsill, Giunta and Kent2022).
3.4. Post hoc team interviews
Twelve months after the hackathon, post hoc interviews were undertaken with each team. The purpose of these interviews was not merely to confirm the accuracy of the timeline but to engage participants in a reflective process in which they could provide context and insight into the events and the sequence as depicted. The questions were structured to capture qualitative data on participants’ experiences and perceptions of the prototyping process, which served to cross-validate the quantitative data captured during the hackathon. By triangulating the data from these interviews with the timeline and other data sources, we aimed to construct a more nuanced understanding of the hackathon activities.
Individual teams participated in semi-structured interviews undertaken by two investigators with one leading and asking questions and the second noting responses. Interviews lasted between 20 and 35 minutes and comprised six questions. For Teams A, C and D, complete teams were present for the interviews; for Team B, only one team member was present due to lack of availability of other members or their having left the institution.
Before the interviews began, teams were shown the definitions of prototype purposes and domains adopted for the study (as defined in Section 3.2). For questions 1–5, teams were shown versions of Figures 5 and 6 with only their own data included, as a prompt to jog their memories and support more accurate recall of events. This helped ensure that, despite the time gap, the data collected remained relevant and reflective of the participants’ actual experiences during the challenge. For question 6, teams were shown the complete figures with data from other teams included, to enable discussion of and reflection on the other teams’ processes, given their knowledge of how the others performed.
The interview featured the following questions:
1. Do the timeline and graphs look representative of your strategy during the hackathon?
2. Did you have any sort of prototyping strategy going into the IDEA challenge?
3. Can you guide us through what you were doing each day during the IDEA challenge?
4. Would you do anything differently if you were to re-do the IDEA challenge?
5. Having seen the timeline and graphs, do you think seeing it in real time would add anything of value to your design process?
6. Do you have any reflections on your strategy compared to other teams?
Interviews were video recorded and transcribed with analysis of the transcription undertaken by two researchers to identify key trends and themes. The results of the interviews are presented in Section 5.
4. Results I - The IDEA challenge dataset
Figures 4 and 5 present graphs highlighting trends and differences in how teams used prototypes during the IDEA Challenge. These show the number of prototypes created each day with divisions according to domains (Figure 5b), the purposes of generated prototypes broken down per day (Figure 5a) and as a percentage of the total number of prototypes (Figure 4a), and the transitions and types of transitions between domains (Figure 4b). Features of these will be explored in the following sections with reference to the graph in which they can be observed. A summary of the raw data is presented in Table 5.
4.1. Identifying similarities and differences in prototyping practices
The data represented in Figure 6 allows for a comparative analysis of prototyping practices used by each team throughout the hackathon. It shows each prototype created across teams on a timeline, with domain on the y-axis and time on the x-axis. Each prototype is colour-coded, dependent on the purpose of the prototype. Multipurpose prototypes, i.e., prototypes with more than one purpose, have two colours corresponding to each sub-purpose. It illustrates that all teams utilized the three identified prototyping domains and purposes to varying extents.
Per domain
Prototyping domains can be compared according to the total prototypes in each and the transitions between them.
An examination of prototyping domains reveals a discernible pattern in the distribution and interaction of prototyping methods (Figure 4a). While all teams engaged with each of the three prototyping domains – sketch, digital and physical – the extent of their engagement varied notably. Team B predominantly engaged in sketching, producing 31 sketches, approximately double the average of 15.5 sketches created by the other teams, indicating that the team placed significant emphasis on the conceptualization phase by using sketches as a tool for rapid idea generation and visualization. Conversely, Team D’s focus was on physical prototyping, with a production of 36 physical prototypes, surpassing other teams’ outputs. This could indicate a focus on solution validation by testing physical designs. No team relied heavily on digital prototypes, which accounted for an average of 12.7% (SD = 3.6) of prototypes across teams.
Team D’s engagement in domain transitions was considerably higher than other teams, with 26 transitions noted, over twice the average of 12.8 (Figures 5b & 6). The difference in domain transitions is particularly emphasized by the 14 transitions between physical prototypes and sketches, possibly indicating a strategy of rapid conceptualization followed by rapid physical testing.
Teams C and D transitioned between physical and digital prototypes more than the other teams, with eight transitions each – possibly to verify digital design assumptions when moving from 3D CAD models to 3D-printed parts. Team A transitioned between digital prototypes and sketches more than any other team, with seven transitions compared to an average of 3.3 across teams.
Per purpose
Prototypes can be compared with regard to their purpose in terms of the number of prototypes created corresponding to each purpose, the number of prototypes with multiple purposes and the types of multi-purpose couplings.
Across all teams, 43.5% (SD = 8.3) of prototypes were active learning prototypes, signifying a commitment to knowledge acquisition across teams during the hackathon. Teams also focused on investigating a broad range of design possibilities, as 40.2% (SD = 14.5) of prototypes were exploration prototypes. Team C’s practice was markedly distinct, with a concentration on refinement prototypes at 63% versus 38.1% on average across all teams, suggesting a focused approach of iterative improvement and detailing of one or a few key designs. Communication prototypes were used the least across all teams, contributing 12.3% of all prototypes (SD = 3.8) (Figure 5a).
Occurrences of multipurpose prototypes can be seen in Figure 6. Team A mostly made multipurpose prototypes on Days 2–4 (Day 1 (0), Day 2 (11), Day 3 (7), Day 4 (5)) of the hackathon. These were all physical and either refinement/active learning or exploration/active learning. On the final day, the team made communication/active learning prototypes. Team B made 8 multipurpose prototypes, fewer than any other team and well below the average of 16.8 across teams; most were physical and refinement/active learning prototypes, with 4 and 6 occurrences, respectively. Team C made multipurpose prototypes distributed across all domains, with 3 physical, 4 digital and 3 sketches, 6 out of 10 of which were refinement/active learning. Team D similarly made mostly physical multipurpose prototypes, with 20 out of 26 being physical; fourteen of these were active learning/exploration prototypes. Unlike Team A, Team D created multipurpose prototypes across all the days (Day 1 (7), Day 2 (12), Day 3 (5), Day 4 (2)).
The most common multipurpose couple was refinement/active learning, contributing to 38.8% of the total. The prevalence of this combination indicates a common strategy of using prototyping as a means of concurrent development and knowledge acquisition. The patterns observed in the creation of multipurpose prototypes, with a majority being physical, reinforce the value placed on tangible experimentation and the direct derivation of insights from tangible interactions.
A further 37.3% were exploration/active learning, 10.4% were communication/exploration and 9% were communication/active learning. Of all the multipurpose prototypes, 74.6% were physical, 11.9% digital and 13.4% sketches.
Per day
Across the hackathon, total prototypes generated per day and their characteristics can be compared.
Teams made a similar number of prototypes during the hackathon, averaging 51 prototypes (SD = 7.8) each. It is, however, evident that the days on which these prototypes were made differed between the teams (as shown in Figure 5b). Across all teams, peak prototype production occurred on Day 2 or 3. Team A made 2 prototypes on Day 1, the fewest of any team, but made the most on Day 2, when they made 29 prototypes. On Day 3, they made 10 prototypes before making 5 on the final day. Team B made a similar number of prototypes each day, 15.3 on average, with a small decline on the final day when they made 11 prototypes. Team C made an increasing number of prototypes during the first three days, from 5 to 11 to 19, before a decline on the final day when they made 6 prototypes. Similarly, Team D made fewer prototypes on Day 1 than on the next two, making 12, 20 and 21 prototypes on the first three days. On the final day, Team D made 2 prototypes, the fewest of any team that day.
Day 1: As previously mentioned, Teams A and C made fewer prototypes on Day 1 than Teams B and D. Teams A, B and C only made sketches on the first day, with no transitions between domains. Most of these prototypes were communication and/or exploration prototypes, with 1, 3 and 2 communication prototypes and 1, 14 and 5 exploration prototypes, respectively. Team D differed from the other teams on Day 1 by making 9 physical prototypes and 3 sketches. It is also evident that the main purposes of prototypes differed between Team D and the other teams, with Team D making 8 active learning and 3 refinement prototypes on this day (presented in Figure 5a).
Day 2: 25 of the 29 prototypes Team A made on the second day were exploration prototypes, more than any other team that day. They also made their first domain transitions on this day, making 11 physical prototypes, 7 digital and 11 sketches (shown in Figure 6). They made 9 domain transitions on this day, the only day the team transitioned. Team B also made their first domain transitions on Day 2, making 4 physical and 2 digital prototypes. Their first transition came after making 31 sketches, more prototypes than any other team before a first transition. Team C also made their first domain transitions, making 6 physical and 2 digital prototypes. Their key purposes of the day were refinement and active learning, each with 6 occurrences. Team D continued to transition between domains on the second day with 8 transitions. Like Team C, Team D’s main purposes that day were active learning and refinement, with 10 and 4 occurrences, respectively, as well as making 14 exploration prototypes.
Day 3: On Day 3, 51 of the 63 prototypes made across all teams were physical, the highest number of physical prototypes of any day. Teams mostly made refinement and active learning prototypes, with 44 and 24 occurrences, respectively. The number of exploration prototypes decreased to 10 from 44 on the previous day.
Day 4: Day 4 saw the fewest prototypes, totalling 24 across teams compared to 63 on the previous day. Most of these were active learning (18) and refinement (8) prototypes.
4.2. Purpose of prototypes in different domains
Figure 7a presents concurrences of purpose per domain across teams. Figure 7b presents the normalised purpose contribution per prototype across teams.
Figure 7a shows that, in aggregate, physical prototypes were used most for refinement and active learning during the hackathon and that physical and sketch prototypes were used most for exploration. With purposes normalised per prototype, Figure 7b shows that physical and digital prototypes were used most for refinement, physical prototypes most for active learning and sketches most for exploration. These observations are corroborated by the statistical tests below.
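The per-prototype normalisation behind Figure 7b can be illustrated with a short sketch. The weighting scheme here is an assumption (the exact scheme is not specified in the text): each prototype contributes a total weight of 1, split evenly across its purposes, so a two-purpose prototype adds 0.5 to each purpose; the domain/purpose records are hypothetical.

```python
from collections import defaultdict

def normalised_purpose_counts(prototypes):
    """Sum purpose contributions per domain, each prototype contributing a
    total weight of 1 split evenly across its purposes (an assumed scheme)."""
    totals = defaultdict(lambda: defaultdict(float))
    for domain, purposes in prototypes:
        weight = 1 / len(purposes)  # e.g. 0.5 per purpose for a two-purpose prototype
        for purpose in purposes:
            totals[domain][purpose] += weight
    return {domain: dict(purposes) for domain, purposes in totals.items()}

# Hypothetical records: (domain, [purpose, ...])
records = [
    ("physical", ["active learning"]),
    ("physical", ["refinement", "active learning"]),
    ("sketch",   ["exploration"]),
    ("sketch",   ["exploration", "communication"]),
]
print(normalised_purpose_counts(records))
```

Under this scheme, the physical domain above would accrue 1.5 for active learning (1 + 0.5) and 0.5 for refinement, keeping each prototype's total contribution equal regardless of how many purposes it serves.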
A chi-square test of independence was performed to determine statistical differences in count data considering the occurrences of Camburn’s prototyping purposes in different prototyping domains. Chi-square tables were generated using Social Science Statistics (Stangroom Reference Stangroom2022), with p and chi-square values corroborated with GraphPad Prism 9.
The relation between these variables was significant, $ {\chi}^2\left(6,N=279\right)=41.8641 $ , $ p<0.001 $ . The effect size for this finding, Cramer’s V, was strong at 0.30 (Akoglu Reference Akoglu2018). Per-cell chi-square values below one indicate that the observed count is approximately equal to the expected count (McHugh Reference McHugh2013). Table 6 presents the results from the chi-square analysis. These indicated that:
• physical prototypes are used more than expected for active learning and less than expected for communication and exploration;
• digital prototypes are used more for refinement and less for active learning; and
• sketches are used less than expected for refinement and active learning but more than expected for communication and exploration.
Note: $ \simeq $ indicates the observed value is approximately equal to the expected value, $ \downarrow $ indicates the observed value is lower than expected and $ \uparrow $ indicates the observed value is higher than expected.
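A Pearson chi-square test of independence of this kind can be reproduced without statistical packages. The sketch below runs the test on a hypothetical 3×4 domain-by-purpose count table (not the study’s data); note that a 3×4 table yields (3−1)(4−1) = 6 degrees of freedom, matching the reported test.

```python
import math

def chi_square_independence(table):
    """Pearson chi-square of independence for an r x c count table.
    Returns (chi2, dof, cramers_v)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Count expected under the independence hypothesis.
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    # Cramer's V: effect size for an r x c contingency table.
    cramers_v = math.sqrt(chi2 / (n * (min(len(table), len(table[0])) - 1)))
    return chi2, dof, cramers_v

# Hypothetical counts, rows: physical, digital, sketch;
# columns: refinement, communication, active learning, exploration.
counts = [
    [60, 10, 75, 30],
    [20,  3,  5,  8],
    [12, 15, 12, 40],
]
chi2, dof, v = chi_square_independence(counts)
print(f"chi2({dof}, N={sum(map(sum, counts))}) = {chi2:.2f}, V = {v:.2f}")
```

The per-cell terms `(observed - expected)**2 / expected` are the values summarised in Table 6: a cell term below one corresponds to the $ \simeq $ annotation, while large terms drive the $ \uparrow $ / $ \downarrow $ deviations.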
5. Results II - post hoc interviews
Post hoc interviews were conducted as per the method defined in Section 3.4 to validate the analysis undertaken, explore its potential value and provide further insights into the teams’ prototyping practices.
5.1. Validity and day-by-day prototyping practices
Teams were asked if the timeline and graphs looked representative of their practices during the IDEA challenge (Q1), if they had any sort of prototyping strategy going into the hackathon (Q2) and to guide the interviewers through their process day-by-day (Q3).
All teams confirmed that the graphics looked representative of the design activities undertaken.
Team A acknowledged that it was difficult to remember what happened on each individual day but added that “the presentation is as I remembered it.” They tried to be systematic at first, with a prototyping approach consisting of ideation, brainstorming and refining largely sketch-based ideas, but quickly moved to physical prototyping when they noted that other teams were making more and that they needed to start testing rather than refining ideas.
Team B divided their effort into three distinct phases: “first was sketching on day 1, then prototyping and refining on day 2 and 3, then finally testing and collecting data on day 4.” The participants explained that they utilized a diverging and converging strategy, aiming to come up with multiple solutions before building and testing to pick the most promising for further development. Team B started with the functional requirements of the product and then made sketches of how different elements could be designed by following a morphological scheme. From here, making and testing was more ad hoc, continuing with working designs without such a formal strategy, with the team adding, “You can put anything on a paper, but when you have to gather supplies or make something fast it doesn’t always go as we thought. This is also visible in the timeline on day 2 where we just skipped to physical prototypes and stuck to that”. They also considered that “it could be possible to have a strategy. But I think because the task wasn’t complex, a strategy wasn’t necessary.” For Team B, it is also noteworthy that on Day 1 they thought, from the brief, that sketches were required above anything else, thus influencing their strategy.
Team C described their daily efforts as follows: “Day 1 was ideation, figuring out what we wanted to make, Day 2 was figuring out what we had available to make it […]. Day 3 was putting things together, and Day 4 was testing.” The team also explained that they aimed to “turn prototypes from day 1 into something that would allow us to learn whether our assumptions and presumptions were correct”. Team C initially used a double diamond approach, but due to time and material constraints, this “went off the rails.” The short timescale of the hackathon, coupled with the need to act within the possibilities of the materials available in their environment, necessitated an ad hoc approach.
Team D initially split the problem up into categories such as drop height, cost, handleability, etc. “On Day 1, we made mini prototypes to learn from within each category. On Day 2, we started to combine the mini prototypes […]. We decided on a concept and iterated on Day 3. Day 4 was wrapping up and testing.” Team D’s prototyping strategy had a goal of speed. They split the problem into the smallest possible sub-problems and tested part by part, the knowledge from testing then being brought together once enough was known about all the sub-problems; examples include testing just drop height or cooling material individually. The team developed ways to “mass produce” prototypes so they could test without fear of breaking or damaging their only prototype. The team emphasised that it didn’t follow a theoretical strategy, but rather built and tested prototypes, making decisions based on emerging insights and learnings. Team D was the only team able to maintain their initial strategy throughout the hackathon.
5.2. Reflections on prototyping practice
Teams were asked to reflect on their own prototyping strategy and practices during the hackathon (Q4).
If they could do something differently, Team A would have started physical prototyping sooner to allow more time for testing different concepts. The team also reflected that they spent a long time diverging, coming up with many concepts, leaving too little time to properly converge on the best concept.
Reflecting upon their own prototyping strategy, Team B expressed that they should have “just made the box [i.e. the obvious solution] from the start, and optimize that for 4 days.” They explained that understanding the physics of the design challenge took a lot of time and could have been learnt faster and earlier through physical experimentation. They also added that it was impossible to gain these insights from sketching or CAD: “In CAD everything looks fine. You have access to glue to attach everything, but [in the real world] when [your] finger doesn’t reach that space you have to come up with alternatives.” The team also agreed that having a better understanding of individual prototyping skills would have been beneficial in dividing efforts better.
Team C discussed the importance of having materials and resources available at the beginning of the hackathon. “If we had loads of scrap laying around, I imagine we would go out and pick what we could use for our prototype. (…) And I think we had made more physical prototypes then.” A team member also added, “I can imagine us just picking something from a pile and putting it on the table, and that would in itself be a prototype.”
Team D would utilize the same strategy of rapidly breaking down the problem and testing sub-components physically if redoing the hackathon. They reflected that it’s important to abandon concepts that perform poorly, but not too soon, bringing up the example of a prototype that ended up not working. “I think it was worth trying. And we scrapped it when we saw that it wouldn’t work without spending a lot of time on it”.
After answering the previous questions, teams were shown the full timelines from all other teams and were asked to reflect on their strategies compared to the others (Q6), knowing the results of the challenge.
Team A stated, in reference to Team D’s strategy, that they were unsurprised that it worked, as Team D were well prepared and able to verify assumptions as they went.
Team B considered that the choice of prototyping domain depends greatly on the goal: if optimising a particular element of performance, physical prototypes make more sense, but if aiming to be as innovative as possible, unconstrained sketching can be better. They also reflected on their working environment, which was separate from where tools and materials were kept, and how this could inhibit making. They also noted (from the daily hackathon check-ins) that Team D were in a prototyping lab, and this could result in greater amounts of physical prototyping through a lower barrier to making.
Team C attributed their slowness in moving to physical prototypes to a lack of materials. Had they been better prepared with resources, “we could instead of sketching have made simple, rudimentary stuff physically, and convey the same information in less time with physical prototypes than sketches.”
Team D reflecting on their own process considered that “in a product development setting, [it is necessary] to reflect around how we can answer design questions as quickly as possible […] if you sit and sketch for a whole day, have you answered important questions? And the same for digital [sic]. Does it answer your questions? The fact that we transitioned a lot between physical and digital isn’t the most important – what we did was answer questions as fast as we could, and that happened to usually be with physical prototypes – by testing physically we immediately have something to work from. And it does not look like the other teams tried to get those concrete answers early. And that distinguishes us from the rest.”
5.3. Potential value of management information on prototyping activity
On seeing the timeline and graphs, teams were asked if they thought seeing it in real time would add anything of value to their design processes (Q5).
All teams commented that real-time graphics such as those shown to the teams (Figure 6) would be useful if compared against a “successful” or “gold standard” prototyping process, allowing them to know whether they were on track or following a strategy far from what is considered good practice. Teams also considered that such graphics would be “very useful for communication” post-design, as they could “provide insight, even after a long time”.
6. Discussion and further work
This section will first contextualise the results presented in the previous section within the research aims posed at the beginning of the article, appraise the IDEA challenge as a research tool and consider next steps for further work.
6.1. Aim I - Examining the purpose(s) and affordances of different prototype domains
Results in Section 4.2 provide Key Finding 1 (KF1): physical prototypes are used more than expected for active learning, according to the $ {\chi}^2 $ test, aggregate purpose contributions per domain and normalised purposes. This would be expected, as physical prototypes are essential to uncovering information about the problem or solution space. This is also in line with established practice (Goudswaard et al. Reference Goudswaard, Gopsill, Harvey, Snider, Bell and Hicks2021).
However, the limited use of physical prototypes for exploration, as suggested by the $ {\chi}^2 $ test and normalized purpose data, raises questions about the impact of time constraints in hackathon settings. It suggests a potential shift in strategy, where teams, under time pressure, prioritize rapid development over extensive optioneering (Dow et al. Reference Dow, Heddleston and Klemmer2009). Team D’s approach of using physical exploration prototypes from the start is particularly notable, indicating an emphasis on using prototypes to learn, as suggested by Lauff et al. (Reference Lauff, Kotys-Schwartz and Rentschler2018) and shown by others to be advantageous under time constraints (Dow et al. Reference Dow, Heddleston and Klemmer2009).
According to the $ {\chi}^2 $ test and normalised purpose data, digital prototypes were used more for refinement and less for active learning (KF2). This would arguably be expected for refinement, as the nature of digital prototypes suits incremental change and polishing rather than the preliminary conceptual phase (Hsu & Liu Reference Hsu and Liu2000). Their limited use for active learning could also be expected, as it is unlikely a model can provide much new information; instead, digital prototypes tend to relate existing information, in line with Otto & Wood (Reference Otto and Wood2001), who argue that digital prototypes in early stages are limited by imprecise and incomplete requirements. The prevalence of physical–digital domain transitions, especially on Days 2 and 3, mirrors industry practices of transitioning between the digital and physical realms, possibly due to the use of CAD and 3D printing in tandem (Goudswaard et al. Reference Goudswaard, Gopsill, Harvey, Snider, Bell and Hicks2021).
Sketch prototypes, according to the $ {\chi}^2 $ test and normalised purpose data, were more frequently used for exploration and communication (KF3). This validates their role in rapid ideation and early-stage concept development. This is consistent with existing literature that emphasizes the utility of sketches in facilitating quick iterations and fostering team communication (Henderson Reference Henderson1991; Yang & Cham Reference Yang and Cham2006).
The widespread use of multipurpose prototypes, particularly in the physical domain, highlights the multifaceted nature of prototyping. KF4 lies in the identification of dominant couplings of refinement-active learning and exploration-active learning prototypes, underscoring a holistic approach to prototype usage, where learning is closely intertwined with both exploration (divergence) and refinement (convergence).
6.2. Aim II - Characterising the prototyping practices of teams in virtually-hosted hackathons
The analysis of team prototyping practices in the virtually hosted IDEA challenge sheds light on the dynamic nature of prototyping practices under time constraints and virtual collaboration. While the teams exhibited similarities in the number of prototypes and peak prototyping periods, their approaches to domain transitions, prototype purposes and utilization of multipurpose prototypes varied considerably.
The similarity in the number of prototypes across teams and the peak output on Days 2 or 3 may reflect a common response to the hackathon’s structure and incentives. Teams were incentivised to upload prototypes by receiving daily points for the challenge. In morning check-ins, the organisers shared the total prototype count in Pro2booth and how many points each team was awarded. After seeing that they had received the fewest points after Day 1, Team A explained that “on the first day I remember we were not aware that we needed to make so many prototypes” and then adapted to match other teams’ outputs, exemplifying how external awareness can influence prototyping strategies. Teams were of similar size and developments were made in the same time frame, further explaining the similarities. Prototyping output peaking on Day 2 or 3 makes sense, as these were the longest days of the hackathon: the first day was partially spent collecting materials, while the fourth was spent benchmarking and pitching designs.
Teams relied more on active learning and refinement prototypes throughout the hackathon, especially towards the project’s end, with exploration more prominent in the initial days. This aligns well with traditional product development processes, which transition from broad exploration to focused refinement (Steinert & Leifer Reference Steinert and Leifer2012). As teams were co-located, it makes sense that fewer prototypes were made for formal communication. This is to be expected given the design scenario and would probably be higher in a real product development process (PDP), as teams would likely be required to engage with stakeholders external to the design team. This was also supported by Teams A and C during interviews, both explaining that the need for communication was small because of team sizes and that the few communication prototypes they made were for internal communication.
KF5 is that active learning and refinement prototypes are critical in hackathon environments with exploration prototypes focused at the beginning of the event.
Variations in domain utilization highlight the impact of resource availability and workspace environment on prototyping choices, mirroring industry scenarios where logistics and resources can significantly shape design practices. Based on these findings, it becomes evident that design in time-intensive settings benefits from access to a variety of tools and materials so that prototyping efforts are not limited. This is further exemplified by Team B’s initial constraint to sketches due to a lack of prototyping equipment and Team C’s delayed transition to physical prototypes due to limited materials, highlighting how resource availability can significantly shape and even limit design strategies, as stated by a member of Team C: “The big thing would be to get more resources ahead of the game, to give us a bit more wiggle room of what to play with.” The environment itself can also affect how teams decide to prototype, as highlighted by Team B: “In our lab where we sketch, all tools and material were in another room. We only sketched and discussed in a sterile environment. Team D was in a prototyping lab,” indicating the positive effects of being in a space that facilitates creativity and experimentation.
While Table 2 indicates that teams had access to mostly the same tools, the way these tools were utilized varied, a factor not solely dependent on their availability but also on the team’s understanding of their own capabilities. Team B’s observation, “We would have to know who were better in what. Like with physical prototyping and sketching. We were not aware of this in the beginning,” highlights a critical aspect of team-based design work – the alignment of tools with the specific skills and strengths of team members. This acknowledgement reflects a common challenge in collaborative design environments, where the effective use of tools extends beyond their mere availability to their strategic application based on individual proficiencies.
Tool utilization is also tied to skill, as it can be argued that prototyping is skill-based, and the skills within a team will likely affect how it prototypes. A proficient prototyping team can use a range of tools efficiently and therefore move more quickly and often between domains, whereas if the team is more accustomed to solely sketching, for example, then sketches become the principal design medium – not dissimilar to all problems looking like a nail when one has a hammer. KF6 lies in the elucidation that the prototyping process is impacted by team capabilities, available resources and working environment.
Regarding the evolution of prototype purposes through the project, Team A showed a clear purpose evolution through the project (Section 4.1.2). They mainly used sketches for exploration in their ideation phase before transitioning to physical active learning prototypes and finally to refinement prototypes when iterating on a final design. Team B spent more time making sketches (for exploration) and digital CAD models (for active learning and exploration) than other teams before transitioning to physical prototypes (refinement and active learning). Following the final day of the hackathon, the team communicated that they would have wanted the hackathon to last longer, as they only had time to physically iterate on a single design. The team did not learn from physical prototyping early on as the other teams did and likely spent more time on sub-optimal solutions. Team C never committed to one concept and pursued two throughout the hackathon. This might explain why they made twice as many refinement prototypes compared to the other teams. Team D built concepts from the ground up, with a focus on a sub-component level. This could explain why the team used more physical prototypes, largely for active learning and exploration, and more domain transitions than other teams. As the team mainly prototyped sub-components, they were able to test several different concepts and solutions, which is likely why they made more exploration prototypes than average. KF7 is that all teams moved through exploration, active learning and refinement but via the use of different prototyping domains. The domains impact the efficacy of these processes (considered in more depth in Section 6.1).
Strategies from each team are shown in Figure 8, where they are overlaid on the prototyping timeline from Figure 6. The figure indicates similarities between Teams A, B and C, who spent the initial days ideating, sketching and resource gathering before building and refining one (or several) designs, with final tests on Day 4. Similarly, Team D refined prototypes on Day 3 and conducted final tests on Day 4 but differed from the other teams on the first two days.
6.3. Aim III - Providing recommendations for prototyping strategies in rapid innovation scenarios
Due to coronavirus restrictions at the time of running the IDEA Challenge, teams were given freedom with respect to the type of design outputs that could be generated. This, coupled with the hackathon being hosted virtually, meant that it was not possible to objectively benchmark the final prototypes that teams produced, and hence not possible to correlate prototyping practices with success. Scores were given based on certain performance metrics, but as teams performed tests themselves and without strict guidelines, teams could not be compared in a meaningful, objective way. We can, however, provide strategic recommendations and insights from the teams’ responses to the post hoc interviews.
All teams recognised the importance of physical prototypes to validate assumptions. Team D won the IDEA Challenge 2021 and their prototyping strategy focused strongly on testing physical prototypes at a sub-component level as early as possible. In retrospect, it is evident that Team D benefited from uncovering multiple challenges and possibilities early, as illustrated by the high number of physical and active learning prototypes they created. With little time to change once a design is decided upon, the team had a better basis for decision-making than other teams that started physical prototyping later.
Sketch prototypes generated when ideating were recognised to help teams towards more innovative theoretical solutions but did not necessarily assist in creating a working solution – perhaps a case of the perfect being the enemy of the good. In a competitive, time-pressured situation, it could therefore be recommended to focus on physical prototypes, as these lead towards functioning solutions. Without making physical prototypes, unrealistic expectations of capabilities can arise, and manufacturing and time constraints can be overlooked.
Shortcomings of digital models were identified in terms of designs looking more advanced than they really are. Team B identified CAD models as running the risk of being impossible to make.
Practical rather than theoretical prototyping strategies are important in design scenarios with compressed development cycles. All teams except Team D had more theoretical design strategies, such as following the double diamond approach, morphological schemes or other formal idea generation techniques. Once teams became aware of time pressures and actually started making prototypes, these strategies were abandoned, either amid the ensuing chaos or because the design challenge was thought simple enough not to warrant a strategy. Team D, conversely, had a strategy based on learning as quickly as possible, which was maintained throughout the hackathon and necessitated methods for efficient production of prototypes so they could be tested without fear of damaging the only prototype. This strategy was identified as being necessary due to the complexity of the presented design challenge, suggesting that other teams were dealing with unknown unknowns as they sought design solutions.
While teams were prepared to differing degrees with materials, all teams did have some availability of physical prototyping resources. It is noteworthy that a top-down approach to design focussing on ideal solutions might not be feasible due to time and resource limitations. In contrast, a bottom-up approach starts with the resources at hand, assessing what can realistically be created with the available materials. This practical perspective is particularly advantageous in scenarios where material resources are abundant, as it encourages designers to creatively utilize what is immediately available, leading to feasible and potentially better solutions.
The IDEA Challenge showed that teams need real information to judge and plan in rapid innovation events. Team B recognised that prototyping strategies need to reflect the expertise of the team – making can be great but only if you can carry this out efficiently.
In summary, the following recommendations on prototyping strategy can be made constituting KF8:
• Make physical prototypes as soon as possible to enable learning about the problem and solution spaces;
• Use sketch prototypes for ideating and developing innovative solutions;
• Use digital prototypes carefully, understanding their underpinning assumptions and without falling into a false sense of advancement;
• Employ a practical prototyping strategy that recognises the complexity of the design challenge at hand, focusing on learning about what you don’t know (as opposed to validating what you already do);
• Use a bottom-up approach (at least in part) to focus on the available materials rather than what you don’t have access to; and,
• Adjust the strategy and tools used according to the capabilities of your team.
These recommendations suggest that although theoretical prototyping strategies provide a useful starting point, success in design is determined by a team’s ability to adapt and learn in the face of the unpredictable nature of real-world design challenges. The disparity between theoretical strategies and actual, applied practices is significant. While theoretical prototyping strategies often provide a high-level, idealized framework for approaching a design problem, the actual process of prototyping – the ‘as-done’ practice – tends to be more complex and nuanced. This complexity arises from the need to adapt to the real-world constraints and opportunities that emerge during the design process.
In practice, successful design outcomes are rarely the result of rigidly following a preconceived strategy. Instead, they depend on a team’s ability to flexibly apply their skills, effectively utilize available resources and respond adaptively to unforeseen challenges and serendipitous discoveries. This flexibility is crucial, especially in time-constrained scenarios like hackathons, where the ability to rapidly prototype and iterate can significantly influence the final design outcome.
A practical prototyping strategy, therefore, is one that prioritizes learning and adaptation. It acknowledges the inherent uncertainties and complexities of the design process, and instead of rigidly adhering to a theoretical approach, it embraces a more dynamic and responsive method. This approach aligns with the realities of prototyping, where a design team must continually assess and adjust their tactics based on the evolving nature of the design challenge, the team’s collective capabilities and the resources at their disposal. KF9 is that success in hackathon-type environments requires practical prototyping strategies that prioritise learning and adaptation.
6.4. Appraising the IDEA challenge as a research tool
We argue that the IDEA challenge can be utilised as a valuable research tool for future design studies. The inaugural hackathon allowed the capture of a large prototyping dataset within a small timeframe of four days. Hosting the hackathon virtually meant that multiple teams undertook the same design task almost completely independently of one another. Semi-structured interviews revealed that prototype capture during the hackathon could facilitate the creation of valid representations of the teams’ design processes – i.e. the as-documented is representative of the as-done. Running the IDEA Challenge also helped postulate new hypotheses, one being that building physical prototypes early in the hackathon was important, encouraging re-running of the hackathon. The hackathon also proved good for team building and networking among participating researchers.
6.5. Limitations
A potential limitation of this study is that design experts coded prototype purposes. Though the prototypes were coded by multiple design experts, it is possible that purposes could be misunderstood. The coding scheme relied on two coders working synchronously and on neither coder having a dominant voice or sway that could skew results. There was also variety in the amount of information that teams provided for each prototype, further increasing the uncertainty of the design experts. Self-reporting of purposes when uploading prototypes to Pro2booth could address this limitation. Making teams self-identify purposes could be more accurate, but it has its pitfalls, such as potentially inconsistent understanding of prototyping purpose terminology.
Conducting participant interviews 12 months after the conclusion of the challenge presents certain limitations that must be acknowledged. Firstly, the significant time gap can lead to memory degradation, where participants may not recall specific details or nuances of their experiences during the hackathon. This lapse can affect the accuracy and richness of the data collected, as memories may become less precise or potentially biased over time. To mitigate these limitations, we used detailed illustrations that accurately represented the prototyping activities from the hackathon. Participants confirmed that these illustrations were representative of their experiences, thereby aiding in jogging their memories and providing a more accurate recall of the events. This method helped ensure that despite the time gap, the data collected remained relevant and reflective of the participants’ actual experiences during the challenge.
The resources accessible to each team were largely dependent on their respective research groups’ environments. This means that the prototyping practices observed in the IDEA challenge could be more representative of the specific resource constraints and opportunities of each group rather than generic prototyping practice. Different teams having access to varied sets of tools and materials could significantly influence their approach to prototyping, leading to variations that are more reflective of resource availability rather than strategic choice. This factor needs to be considered when interpreting the results, as it could have impacted the teams’ decisions and outcomes.
The number of prototypes being a factor in the challenge’s incentivization might have led teams to focus on quantity over quality, possibly skewing the data towards a higher number of less developed prototypes. Incentivization was nevertheless deemed necessary to ensure participants used Pro2booth to capture prototypes: as the hackathon was a competition, there was a risk that teams would pursue only elements carrying point-scoring ability, and incentivizing capture ensured that Pro2booth was used as an active and integrated part of the hackathon.
The specific context of a hackathon, with its unique pressures, goals and team dynamics, may not accurately reflect the broader range of design and prototyping processes encountered in varied professional and educational settings, although it has been shown to share multiple traits with them. Therefore, while the insights gained provide a valuable understanding of prototyping practices within this specific context, caution should be exercised when applying these findings to other design environments or assuming their applicability across different types of design scenarios.
6.6. Further work
Future work for the next IDEA challenges includes incorporating time and cost measurements to trade off purpose, output and effort in order to develop a cost/value metric for prototypes. This study considered prototyping instances as equal; however, the different time and cost benefits of prototyping techniques should be evaluated. This could be broadened to identify trends with manufacturing methods, looking for alignment with purposes and the time and cost of these. Future studies could also look at specific knowledge dimensions covered by different domains, manufacturing methods and time/cost trade-off, as it is likely necessary to have prototypes of different purposes and to differing degrees to facilitate the generation of knowledge for different knowledge dimensions. Further research should also investigate and analyse prototype influences and pedigrees within the IDEA dataset to better understand how different prototypes affect each other and the outcome of a design process. Similarly, future work could perform a more refined granular analysis at smaller intervals.
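As a sketch of what such a cost/value metric might look like, the snippet below computes knowledge generated per unit of combined time and money cost. The record fields, the hourly rate and the formula itself are hypothetical assumptions for illustration, not a metric defined in this article:

```python
# HYPOTHETICAL cost/value metric sketch - fields, rate and formula are
# illustrative assumptions, not a metric used in the IDEA Challenge.
from dataclasses import dataclass

@dataclass
class PrototypeRecord:
    hours_spent: float     # time cost of making the prototype
    material_cost: float   # monetary cost of materials
    insights_logged: int   # crude proxy for knowledge generated

def prototype_value(p: PrototypeRecord, hourly_rate: float = 50.0) -> float:
    """Knowledge generated per unit of combined time/money cost."""
    total_cost = p.hours_spent * hourly_rate + p.material_cost
    return p.insights_logged / total_cost if total_cost else float("inf")

quick_sketch = PrototypeRecord(hours_spent=0.5, material_cost=0.0, insights_logged=2)
cad_model = PrototypeRecord(hours_spent=6.0, material_cost=0.0, insights_logged=3)
```

Under these invented numbers, a half-hour sketch yielding two insights scores higher than a six-hour CAD model yielding three, reflecting the kind of purpose/output/effort trade-off that future challenges could quantify.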
In interviews, teams recognised it would be useful to see graphics such as those presented in Figures 5 and 6 in real-time to see how their performance can be benchmarked against a “gold-standard” project. Further work will therefore explore what this gold standard could look like and the integration of these live graphics within the Pro2booth environment.
Though this article has established differences in prototyping strategies and practices during a virtually hosted hackathon, questions still remain on how these impact design outcomes. Future studies should therefore use more prescriptive hackathon outputs to facilitate benchmarking of design performance in order to elucidate how prototyping strategies and practices impact performance. This would permit clearer recommendations on prototyping best practices in these scenarios. Future IDEA challenges will therefore conduct post hoc benchmarking of designs – either through testing in person, or, if hosted virtually, by requiring teams to ship their designs to the hosts for performance testing after the hackathon.
The article’s results and their implications provoke a broader question for the design research community regarding the realities of prototyping and practices, along with how they should be taught and innovated. Results indicated that theoretical prototyping strategies did not last when time pressures emerged and a real thing had to be made. This is not to say that design theory has no place – it clearly does – but it raises the question of how high-level strategies are balanced with the operational realities of designing. This impacts how future designers and engineers are taught to design and use prototypes and also how new methodologies are innovated within this space. Didactic design education runs the risk of abstracting away from an actual technical challenge and getting lost in the nuance of an abstract methodology. As an alternative, do you drop students in and let them figure out these realities for themselves? Or is there the potential that these can be more formally taught in some way to expedite the learning process? We do not profess to have the answers but invite the wider research community to help consider potential solutions going forwards.
7. Conclusion
To improve understanding of prototyping practice at the fuzzy front end of design, the aims of this article were threefold: (i) to determine the purposes for which physical, digital and sketch prototypes are most appropriate; (ii) to develop an understanding of prototyping strategies and practices at hackathons; and (iii) to provide recommendations on prototyping best practice at hackathons. In doing this, the article provided nine key findings (KF).
These were achieved via post hoc analysis of the IDEA challenge dataset – an archive of prototypes generated during a 4-day virtually hosted hackathon – along with purposes for their creation and insights from these. These were coded by design experts to categorise the purpose of each prototype in accordance with Camburn’s prototyping purposes. Prototyping practices could then be observed by mapping prototypes produced during the hackathon according to their domain (physical, digital or sketch) and purpose over time. The coding was validated by returning to the teams in a semi-structured interview to ascertain if the practices were representative of what actually occurred during the hackathon.
Regarding the purposes of prototypes, a chi-square test of independence showed statistically that during the hackathon, physical prototypes are used more for active learning and less for communication and exploration (KF1), digital prototypes are used more for refinement and less for active learning (KF2) and sketches are used less for refinement and active learning but more than expected for communication and exploration (KF3). While this finding is congruent with existing practice, it is the first time it has been demonstrated with a real prototyping dataset. Dominant couplings of prototyping purposes were shown to be exploration-active learning and refinement-active learning (KF4).
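The domain-by-purpose comparison behind KF1–KF3 can be illustrated with a minimal sketch of a chi-square test of independence. The contingency counts below are hypothetical placeholders, not the IDEA Challenge data, and the helper function is our own illustration rather than the analysis code used in the study:

```python
# Illustrative chi-square test of independence on a domain-by-purpose
# contingency table. All counts are HYPOTHETICAL, not the IDEA dataset.

def chi_square_independence(table):
    """Return (chi2 statistic, degrees of freedom) for a contingency table."""
    rows, cols = len(table), len(table[0])
    row_totals = [sum(r) for r in table]
    col_totals = [sum(table[i][j] for i in range(rows)) for j in range(cols)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i in range(rows):
        for j in range(cols):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2, (rows - 1) * (cols - 1)

# Rows: domains (physical, digital, sketch); columns: purposes
# (active learning, communication, exploration, refinement).
observed = [
    [30, 5, 8, 12],   # physical
    [6, 7, 9, 18],    # digital
    [4, 15, 20, 5],   # sketch
]
stat, dof = chi_square_independence(observed)
```

With these invented counts the statistic exceeds the 0.05 critical value for 6 degrees of freedom (12.59), which is the kind of result that would indicate domain and purpose are not independent.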
Teams were shown to use varying prototyping processes at hackathons but key takeaways lie in active learning and refinement prototypes being critical (KF5); prototyping processes being impacted by team capabilities, available resources and working environment (KF6) and that all teams move through exploration, active learning and refinement but via use of different domains affording differing levels of success (KF7).
Recommendations on prototyping strategies during hackathon-like events (KF8) are provided based on information from the prototype dataset and post hoc interviews and include making physical prototypes as soon as possible, employing a practical prototyping strategy and adjusting said strategy to the capabilities of your team. Success requires practical prototyping strategies that prioritise learning and adaptation (KF9).
Future iterations of the IDEA challenge will capture prototype cost (time and money) in order to determine the ‘value’ of a prototype and also objectively benchmark design outputs from each team so these can be correlated with prototyping strategies used.
The implications of the work on education and practice provoke the question of how to balance theoretical vs. practical design strategies given that the former were largely shown to be abandoned once physical prototyping began and time pressures set in.
Data availability statement
The data supporting this study’s findings are openly available on Zenodo.org at https://doi.org/10.5281/zenodo.6225854.
Acknowledgements
The authors would like to thank the participants of the IDEA Challenge 2021 for taking part in this study.
Financial support
This research is supported by the Research Council of Norway through its industrial PhD funding scheme (grant number 321386). The work has been undertaken as part of Engineering and Physical Sciences Research Council (EPSRC) funded projects Brokering Additive Manufacturing (BAM) and ProtoTwinning and Royal Academy of Engineering funded Project Clean Access (grant references EP/V05113X/1 and EP/R032696/1, respectively) conducted at the University of Bristol in the Design and Manufacturing Futures Lab.