
User and stakeholder perspective taking in novice design teams

Published online by Cambridge University Press:  28 September 2022

Antti Surma-aho*
Affiliation: Department of Mechanical Engineering, Aalto University, Espoo, Finland
Tua Björklund
Affiliation: Department of Mechanical Engineering, Aalto University, Espoo, Finland
Katja Hölttä-Otto
Affiliations: Department of Mechanical Engineering, Aalto University, Espoo, Finland; Department of Mechanical Engineering, University of Melbourne, Melbourne, VIC, Australia
*Corresponding author: A. Surma-aho [email protected]

Abstract

Taking the perspective of users and stakeholders can help designers incorporate human-centricity in their practice. However, we know relatively little about the dynamics of perspective taking – a cognitive facet of empathy – in design processes as a situated cognitive and behavioural activity, rather than as an overall orientation. To illuminate how perspective taking is used in design, we carried out a longitudinal multiple case study of four 9-month-long graduate-level product and service design projects, exploring differences between high and midscale performance in different design phases. Through thematic analysis of review session discussions, we find that perspective taking in high-performing sessions involves three aggregate dimensions: gathering data to form perspectives, scoping and making sense of perspectives and using perspectives in creative processing. We identify phase-dependent characteristics for the scope and emphasis of perspective taking in concept development, system design and detailed design. We also describe different ways in which novice teams struggled to create and apply user perspectives. As a result, the current study sheds light on the situated nature of perspective taking and on how effective perspective taking changes across the design process.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

Understanding users is a foundation for design (Sharrock & Anderson 1994; Redström 2006). User-identified problems and workarounds enable the design of better solutions (Hyysalo 2006), user-related innovation is important for product success (Saunders, Seepersad & Hölttä-Otto 2011; Hölttä-Otto et al. 2018) and expert designers give high priority to user- and context-related knowledge (Atman 2019). Still, many product launches fail in the market due to a lack of user acceptance (Schneider & Hall 2011). In part, these failures can be attributed to differences in designers' and users' points of view (Chamorro-Koc, Popovic & Emmison 2008) and to the complexity of design processes, which makes user-centricity challenging to practise (e.g., in architecture: Van der Linden, Dong & Heylighen 2019).

Perspective taking (Surma-aho & Hölttä-Otto 2022) – adopting another person's point of view (Davis 1983) – is one way to improve user understanding. It represents a cognitive dimension of empathy, with the degree to which one can imagine 'putting oneself in another person's shoes' measuring the capacity to engage in perspective taking (Davis 1983). Active attempts at perspective taking lead to, for example, reduced bias in assessing the causes of others' behaviours (Galinsky & Moskowitz 2000). Designers use varied perspectives (Smeenk, Tomico & van Turnhout 2016), analogous experiences (Johnson et al. 2014) and abductive reasoning (Oygür 2018) to develop user understanding and explore potential solutions (Dorst & Cross 2001). Studies show that taking the perspectives of users aids interpersonal understanding, both in design practice (Both & Baggereor 2009; Kelley 2015; Hanington & Martin 2019) and in design education (Zoltowski, Oakes & Cardella 2012; Walther, Miller & Sochacka 2017).

However, accurately understanding user perspectives and incorporating these insights into designs can be challenging, particularly for those with limited design experience. Novice and student designers tend to find constructing user understanding difficult: they may fail to consider a broad enough context (Zoltowski et al. 2012; Björklund 2013), struggle to balance multiple user and stakeholder perspectives, including those of producers, suppliers and retailers (Scott 2008), and respond too simplistically to complex feedback on their designs (Sugar 2001). Yet research also suggests that some novices are able to form in-depth understanding by developing genuine relationships with users and stakeholders (Zoltowski et al. 2012). Without prior experience and insights from user-centred projects (Popovic 2004; Oygür 2018; Van der Linden et al. 2019), how do novice designers attempt to understand and leverage user and stakeholder perspectives in their work?

In response to this research question, we investigated how different novice design teams used perspective taking to understand users and stakeholders, and how this understanding was applied in designing solutions. The results shed light on perspective taking processes in novice design teams and suggest how novice designers can cultivate and leverage user understanding in their work. We find evidence that perspective taking is tied not only to understanding users but also to the process of design itself.

2. Background

User-centred design (Sanders 1992) involves significant perspective taking efforts. Designers regularly consider the perspectives of users and other stakeholders to inform not only their understanding of the extant situation but also the development of novel solutions. Overall, empathy for users is seen as a key process in design practice (Cross 1982) and design thinking (Micheli et al. 2019). While there is a wealth of studies on designer–user interaction (e.g., Luck 2007; Hess & Fila 2016) as well as on human-centred design cognition (Cross 2004; Gero & Milovanovic 2021; Cascini et al. 2022), we propose that the psychological construct of perspective taking (Davis 1983) can provide a valuable additional lens on user-centred design.

Perspective taking represents a (a) cognitive and (b) purposeful form of empathy that is (c) influenced by a wide array of situational factors, such as observer–target similarity, the observer's respect for the target and the perceived need of the target (Cuff et al. 2014). Trait perspective taking has been described as a stable personality disposition, while state perspective taking refers to intentionally taking others' perspectives at specific moments (Clark, Robertson & Young 2019). In this study, we examine designers' perspective taking at specific moments.

Design presents a unique context for perspective taking. Design practice emphasises a wide range of behaviours for interpreting others' experiences (e.g., Hanington & Martin 2019), whether these be others' knowledge, feelings, decision-making logic or any such cognition. To take end-users' perspectives, designers talk with users, imagine users and synthesise their needs, and test solutions with users (Hess & Fila 2016). Taking part in users' experiences and purposeful reflection can also support more in-depth understanding (Kouprie & Visser 2009; Smeenk et al. 2016).

The understanding that designers develop through perspective taking is constructivist in nature, meaning that it is person-bound, subjective and malleable (Oygür 2018; Van der Linden et al. 2019). Designers use user understanding in formulating the problem (Ball & Christensen 2019) and in generating and elaborating solutions, that is, in creative processing (Oygür 2018; Van der Linden et al. 2019; Pedersen 2020). In user understanding, perspective taking supports both nonstereotypical (Ku, Wang & Galinsky 2015) and nonegotistical judgements of others (Epley et al. 2004). As such, we might expect 'deep' understanding of a user group to avoid stereotypes and to be independent of the designer (e.g., a confident designer should not assume all users are confident). Perspective taking also assists in negotiating contrasting points of view, such as conflicts in romantic relationships (O'Connell Corcoran & Mallinckrodt 2000) and sales negotiations (Galinsky et al. 2008). In practice, we might expect perspective taking to balance understanding of users and other stakeholders, such as suppliers or retailers.

How such balance may be achieved, however, remains unclear. Designers can struggle to correctly interpret users' perspectives (Heylighen & Dong 2019; Chang-Arana et al. 2020a, 2020b; Li & Hölttä-Otto 2020; Li et al. 2021), and increased evaluation may be needed to build a more accurate basis for perspective taking. The intentional nature of perspective taking (Zaki 2014) may help novice designers balance design concerns. First, designers may consider when, how and to what degree they conduct perspective taking. Second, designers decide to pay attention to others' perspectives and appraise them as valuable. Third, sufficient information must be gathered to enable constructing an accurate perspective. These three steps may shape the quality of user understanding.

Designers also need to determine the appropriate degree and format of perspective taking to inform their creative solutions. Perspective taking can influence the problem frames adopted by designers as they formulate problems and solution conjectures (Beckman & Barry 2007; Paton & Dorst 2011). It goes beyond understanding, influencing how designers evaluate and generate constraints, value and working principles (Dorst 2011). The problem frame adopted initially shapes the direction and quality of subsequent design efforts, so the impact of initial perspective taking may carry through the complete design process (Walz, Elam & Curtis 1993; Chakrabarti, Morgenstern & Knaab 2004). In practice, perspective taking may, for example, influence which stakeholder groups solutions are tested with, how designers present the value of a solution and how they respond to feedback (McMullen 2010). Perspective taking may also play a continuous role through iteration in design processes (Hess & Fila 2016; Smeenk et al. 2016; Heylighen & Dong 2019; Xue & Desmet 2019).

Thus, this study explores how novice design teams acquire and leverage understanding of users’ and stakeholders’ perspectives in different phases of the design process.

3. Methods

We carried out a multiple case study on how novice design teams develop and use understanding of users’ and stakeholders’ perspectives during product and service design projects.

3.1. Case context and participants

This study focuses on a convenience sample of four student design projects at a North European university, carried out in a 9-month multidisciplinary graduate product and service design project course run across multiple collaborating institutions. Each project had a different client company and tackled a distinct design brief, differing in the degree to which the target user group(s) and usage context had been specified at the onset by the client company (Table 1). As all four projects had distinct design briefs in different industries, with different clients and design teams, we treat them as separate cases despite the shared course context (similar to other multiple case studies, such as Van Echtelt et al. 2008). The open-ended, ill-defined design briefs required the teams to explore a wide variety of data and to interact with both the client companies and the instructor team. Still, the teams made all design decisions about project scope and solutions.

Table 1. Team composition and project brief of the four cases

The graduate students taking part in the course had worked in design jobs (summer and/or part-time) and had completed undergraduate degrees in their respective fields. Each team, comprising three to four students, had at least one student from product development (studied under engineering) or industrial design (studied under arts and design), with the other students having no previous product or service design experience, which emulates the diversity of many real-world design teams. As such, we consider the student teams to represent novice design teams with limited design experience and skills.

3.2. Data collection

The primary data source was observation of design review sessions during the design projects, supported by intermittent project documentation and handouts from the teams. The studied project course was structured around roughly 2-week design challenges, with seven deliverables during the course. Each challenge ended in a design review session, held separately for each student team. The first two review sessions (T0) involved needfinding and focused on problem space exploration (e.g., benchmarking), and were excluded from the data collection (see Figure 1). Data were collected from the five subsequent review sessions, spanning five prototyping challenges focused on solution space exploration (labelled T1–T5, see Appendix). In each design challenge, the student teams were expected to demonstrate a tested prototype considered complete for that project phase, along with new project-related insights learned during the challenge.

Figure 1. Course structure, data collection points (in grey) and grade scores averaged from three sub-grades. Note that this visualisation omits time dedicated to an initial rehearsal project, holidays and dedicated documentation writing and presentation preparation.

3.3. Review sessions

Each review session lasted 41 minutes on average (ranging from 37 to 51 minutes), during which students presented their decision making and received feedback on their prototype, the design choices made and the methodologies used. The 20 observed review sessions (T1–T5 for four teams) were audio- and video-recorded. They were transcribed verbatim by the first author, resulting in an average of 6500 words of transcribed discussion per session (ranging from 3600 to 8500).

3.4. Outcome measures

Additionally, the grades of each review session were collected and used to identify high and midscale performers in each design challenge and phase. The grading was carried out independently by the instructional staff, consisting of a product development professor, the course coordinator (M.Sc. in engineering, product development) and three graduate course assistants from different fields (computer science, industrial design and innovation management). The design review grading rubric included three distinct sub-scores (scale 0–5) assessing the team on (a) the completeness of the prototype and testing, (b) the depth and relevance of their learning and (c) their understanding of the current design challenge (see descriptive statistics per design challenge in the Appendix). The review session at T5 was graded using only one general grade rather than the rubric. Overall, the grades reflect the teams' performance both in following a useful design process and in showing project progress.

3.5. Data analysis

The design review session transcripts were analysed with a constructivist framing, with the aim of identifying how novice design teams considered end-users' perspectives. The qualitative analysis process (see Figure 2) was based on thematic analysis (Braun & Clarke 2006) and made frequent use of the constant comparative method: codes were iterated upon by comparing them to each other and, as a result, new codes were created and old ones split and merged as necessary.

Figure 2. Data sources and analysis process.

The data were qualitatively coded in multiple phases (Saldaña 2013), with individual arguments made during the review sessions as the unit of analysis. First, the first author familiarised themselves with the dataset by making free-form notes during observations and by transcribing the audio files. Next, the review session transcripts were coded using line-by-line holistic coding to identify self-standing excerpts referring to users or other stakeholders (such as potential suppliers, distributors or collaborators for the novel solution) and their perspectives. These segments were then clustered into codes based on semantic-level thematic similarity of the empirical content (Braun & Clarke 2006), forming the first-order categories in Figure 3. Finally, focused coding was applied to identify salient processes in the design teams' perspective taking within the descriptive categories, clustering similar themes into the second-order categories in Figure 3.

Figure 3. Hierarchical organisation of qualitative codes developed in this study.

Throughout this process, coding and categories were discussed jointly by the authors to improve the reliability and validity of the analysis, with any differences in interpretation discussed until agreement was reached (similar to, e.g., Crilly & Moroşanu Firth 2019; Lauff et al. 2020). This type of group analysis practice is commonly used in qualitative research where the aim is to formulate new hypotheses about phenomena, rather than to test hypotheses through more quantifiable and replicable coding (Saldaña 2013). As with all research, the positionality of the authors can be considered to have influenced coding choices. In this case, all authors have advanced degrees in engineering, have experience in teaching project-based design courses with multidisciplinary teams, including engineering design and industrial design students, and have practical experience in product and/or service design.

To supplement understanding of the design process and support full understanding of the references made in the review sessions, the teams’ handouts, reports and prototypes were examined. Overall, coding of the review session transcripts formed the main analysis of this study, with other documents further enriching our view of the four cases.

After coding the review sessions, aggregate dimensions of perspective taking were formed through cross-project-phase and cross-performance comparisons. To create these groupings, the data were first grouped into three distinct project phases (concept development, system design and detailed design), based on the design goals advocated by the course. Changes in design goals were visible as changes in the teams' behaviour: the teams primarily strove to understand the project context and develop preliminary solutions in the concept development phase, then moved on to developing a more holistic solution in the system design phase and finally to developing a functional proof-of-concept prototype in the detailed design phase. Second, the review session grading was used to categorise the performance of each design team in each design review session as either high or midscale. The three sub-grades were averaged into an overall grade for each review session for each team and, based on these overall grades, the review sessions were assigned to either the high-performing or the midscale group by splitting at the mean overall grade. The resulting assignments were checked with the teacher-in-charge to provide further validity for the grouping. The categorisation resulted in 12 high-performing sessions and eight midscale-performing sessions (Table 2).
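To make this two-step grouping concrete, the short Python sketch below averages three sub-grades into an overall grade per session and then splits sessions at the mean overall grade. The grades shown are hypothetical placeholders rather than the study's data, and the handling of sessions falling exactly on the threshold is an assumption for illustration only.

# Minimal sketch of the performance categorisation described above.
# The sub-grades are hypothetical placeholders (scale 0-5), not the study's data.
from statistics import mean

# (team, session) -> (completeness, depth of learning, understanding of challenge)
sessions = {
    ("Water", "T1"): (4, 5, 4),
    ("Tennis", "T1"): (3, 2, 3),
    ("Farming", "T1"): (2, 3, 3),
    ("Finance", "T1"): (4, 4, 5),
}

# Step 1: average the three sub-grades into one overall grade per session.
overall = {key: mean(grades) for key, grades in sessions.items()}

# Step 2: split at the mean overall grade across all sessions.
# (Sessions exactly at the mean fall into the midscale group here;
# the paper does not specify a tie-breaking rule.)
threshold = mean(overall.values())
categories = {
    key: "high" if grade > threshold else "midscale"
    for key, grade in overall.items()
}

for (team, session), grade in sorted(overall.items()):
    print(f"{team} {session}: overall={grade:.2f} -> {categories[(team, session)]}")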

Table 2. Design phase and performance categorisation

Thus, perspective taking was analysed in a 3-by-2 matrix, by project phase and design-challenge-level performance. Even though the teams' performance changed across design challenges and even within project phases, their perspective taking patterns in each design challenge adhered to the respective performance category, regardless of their performance in previous or following sessions. For example, Teams Tennis and Finance were both midscale at T2 and exhibited similar perspective taking patterns to one another. At T3, however, the patterns of Team Tennis resembled those of the other high-performing teams in the system design phase, while Team Finance continued to display midscale patterns. Further, while Teams Tennis and Finance stayed in their respective performance categories at T4, both teams' perspective taking had changed to reflect the patterns of the detailed design phase.

4. Findings

Examining the review sessions of the four design projects, we saw different facets of gathering data to form user perspectives, scoping and making sense of perspectives and using perspectives in creative processing. Furthermore, we observed clear differences across design phases and performance categories.

4.1. Gathering data to form user perspectives

Table 3 shows an overview of how gathering data to form user perspectives was carried out and changed through project phases and performance categories.

Table 3. Overarching patterns and developments in gathering data to form user perspectives

Concept development: looking to understand user cognition, emotions and behaviour

While cases in both performance categories engaged in user and stakeholder research, reported behaviour in high-performing and midscale sessions differed in the scope of information sought. Teams in high-performance T1 sessions had focused on understanding user cognition and affect in addition to user behaviour and other project factors, laying the foundation for more in-depth perspective taking. This focus was expressed through inquiries about users’ cognition, such as ‘what would they want’, ‘are they interested in this’ and ‘what do they feel’. Further, the focus was shown both when exploring the value of a specific solution conjecture (‘what would [users] prefer to have for visual cues [in our solution conjecture]’), and when more generally aiming to understand users (‘what would [users] want to know if they could anonymously, non-traceably [sic] ask anything [about businesses]’).

In contrast, the goals reported in midscale sessions in T1 were primarily centred on how users behaved around solution conjectures, comprising phrases like ‘would they do it spontaneously’, ‘would they post about this on social media’ and ‘will they buy this’. Cases in the midscale category also recounted research goals that did not support learning directly, such as team building (‘it was fun to make’) and making the deadline (‘at least we have something to show you [the reviewers]’). Explicit techniques to understand users’ and stakeholders’ cognition and emotions were largely absent, suggesting focus on understanding the utility of solutions more than users’ and stakeholders’ perspectives.

These differences in foci were mirrored in the methods chosen for user research. In the midscale category, Team Tennis had only observed users and Team Farming had observed signs that their prototype had been used. These methods resulted in quantitative data, such as how many prototypes had disappeared since the team last saw them. This did not stimulate holistic perspective taking. In the high-performance category, teams had complemented observations with interviews. For example, while Team Water named user behaviour as their primary focus, stating their main goal for prototype testing as ‘[wanting] to know the minimum way of having anyone contribute to water quality monitoring’, they also showed interest in the users’ cognition and affect by interviewing users after having them interact with a prototype.

System design: targeting relevant users and stakeholders

In the system design phase, cases in both performance categories expressed user-centred data gathering patterns akin to the high-performance category in the concept development phase. Cases in both performance categories focused on learning about user cognition, emotions and behaviour alike, making this no longer a differentiating factor across higher and lower design output and process quality.

Despite capturing user cognition and emotions in addition to behaviour, some midscale-performance cases targeted users and stakeholders that were less relevant given the team's current solution conjecture. For example, Team Finance was working on a business foresight and planning tool in T3 and tested it with established small and medium enterprises, despite having already learned that 'many of these small companies don't see value in long-term planning, because their work is so day-to-day and week-to-week' and heard that 'this might be helpful for someone who is new to business or someone who is bigger and has more resources'. However, as the team failed to target their data collection at these promising user groups, their learnings were limited to reinforcing earlier understanding of whose problems the solution conjecture might be fit to address: 'this appeals more to those who are starting up their business'. In contrast, Team Farming in a high-performing T3 session had been able to find suitable proxies for large-scale farmers to test their prototypes with: agriculture students whose families had large farms, and the caretakers of large farms.

Detailed design: rare and focused user research

In the detailed design phase, cases in the high-performance category discussed users and their perspectives less, increasing instead discussion around feasibility and technical implementation. For example, in T4, Team Farming described an intricate database structure for a digital solution to help farmers learn from each other, and Team Tennis had coded a camera application that could recognise and modify views of tennis courts. As extreme cases, Team Water in T4 and Team Finance in T5 had not done any user research in favour of building a functional and integrated solution. When gathering further data to form perspectives was discussed, the high-performing cases focused on more specific topics than before. For example, Team Water in T5 focused on strengthening their solution conjecture by interviewing managers instead of end-users (‘[our client company] helped us interview the head of a citizen science group’) and by expanding its use case (by inquiring whether citizen scientists would be willing to verify water quality measurements from strangers to supplement the measurements of certified individuals).

The midscale cases, in contrast, still asked users about more fundamental matters; for example, Team Farming in T5 asked how often farmers use agricultural consultants' help in decision making, although their solution was already partly focused on connecting farmers to consultants. Similarly, Team Tennis in T5 continued asking users 'if they would use' the team's solution. Overall, while midscale cases were still trying to find overall solutions to deliver value, high-performing cases were ensuring that the detailed design of their chosen solution delivered value.

4.2. Scoping and making sense of perspectives

Table 4 shows an overview of how scoping and making sense of perspectives were carried out and changed through project phases and performance categories.

Table 4. Overarching patterns and developments in scoping and making sense of perspectives

Concept development: sensemaking by moving between generalisations and data

The high-performing cases in T1, enabled by the user data that they had gathered, moved between generalisations and data in their reasoning. They grounded higher-level interpretations in direct and less-ambiguous learnings. For example, Team Water found that laypeople valued seeing water quality being measured in their local area: 'people really wished for and liked the fact that something is going on locally, that "hey, they are actually measuring the water quality"'. This generalisation was supported by less-abstract inferences and observations, such as quoting specific users ('this one lady in particular was very delighted to see that "oh, they are measuring it here", "oh, it's really nice and I really want to help because it's happening here in my local area"') and describing tangible behaviour ('even a cyclist stopped to look at it and then moved on'). The team also used a metaphor to further elaborate on the generalisation: 'it's not something like, you are reading the newspaper and it's only at the end, only if you're bored you get to the part where they say that there was water quality measuring going on. Here, they could see'. The combination of generalised insights and tangible learnings gave user-centred arguments more transparency.

The midscale-performing cases did not move between data and generalisations when discussing users and stakeholders, instead only reporting data. For example, Team Tennis recounted various observations of user behaviour around the prototype, such as 'they were taking photos', 'three people actually played' and 'people just gave the racket to us and then left', but failed to voice generalisations based on these statements. Team Farming explicitly referenced difficulties in generalising: 'we don't know what we learned, we don't know what it means if no-one bought anything or if everything sold out'. Thus, cases in the midscale-performance category were less effective in building user-centred generalisations.

The midscale cases traced their inability to generalise user-centred knowledge to suboptimal method choices and goals. Team Tennis acknowledged that their test had failed to produce relevant learnings due to various reasons, such as the wrong type of participants (‘we needed people with time, so for example tourists would have been perfect’), the weather (‘the ground was wet, and I think the ball is probably still wet’) and the prototype setup (‘I guess we would’ve needed a more proper setup anyway, as this was kind of sketchy’). Team Farming expressed troubles in planning prototyping and testing (‘we didn’t know which ideas we should make a prototype out of’). While Team Farming wanted to understand users and to design valuable solutions (‘[our focus in the test was too broad] so that we could finally find the key needs so that we can start focusing on something and not go all directions’), they did not know how to do it.

System design: scoping and sensemaking by grouping users

In the system design phase, cases in both performance categories moved between generalisations and data, akin to high-performing cases in the concept development phase.

Also, all cases now grouped users and stakeholders, displaying distinct needs, problems and value propositions for each group. In the high-performance category, Team Tennis at T3 had bracketed users by age and role in the tennis community: young players, middle-aged players, tennis coaches, casual players, competitive players, and tennis club and federation representatives. Thus, the team was able to define distinct types of value that their solution conjecture, a digitally run tennis cup, could provide for each group, such as ‘the value for most casual players is this matchmaking [feature]’, ‘for younger people, the badges, points, statistics, are more motivating’ and ‘[a local club manager] is looking for solutions that would help in engaging the more casual players in the club’. Similarly, Team Water in T2 explored potential value to two user groups, planning to tailor their solution to one or the other: ‘we have to make a decision of whether we’re going to cater [the solution] to citizen scientist groups or if we are still going to try to do something with outdoor enthusiasts’.

While some user groups were portrayed stereotypically (such as younger tennis players liking gamification elements), the grouping brought structure to the design projects, thus supporting further design work. Grouping acknowledges the existence of distinct perspectives and proposes that certain group characteristics create specific cognition, emotions and behaviours in the group. For example, Team Farming in T2 had defined two target groups: 'farmers that are planning a project, like building a new cowhouse' and 'farmers who already have done or are doing a similar project'. They also specified distinct value for each group, such as the former wanting 'more knowledge' and the latter always having 'time for other farmers, since the days go by faster on the tractor if they're talking to someone'. In this high-performing case, the team further specified interactions between the two groups ('sometimes they call unknown farmers, sometimes farmers they already know') and general characteristics beyond the groups ('they used the old kind of phones'). Detailed user groups also helped identify knowledge gaps and target future user research efforts, with Team Water in T2 describing how they had not 'talked in-depth with citizen scientists', a potential user group, but would have 'at least two video calls with [them] next week'. Overall, grouping users helped high-performing cases scope user research efforts.

However, while cases in the midscale-performance category had narrowed their focus to specific groups, they had not considered the perspectives of some user groups central to the design problem, causing challenges in solution development. For example, Team Finance in T2 had tested a system to market sustainable businesses primarily with business owners, omitting consumer perspectives. While summarising that ‘the idea of free advertising for [small and medium enterprise owners] is fantastic’ along with various aspects that business owners would wish to highlight about their companies, consumers were considered only as recipients of information rather than as a group whose perspectives should be understood: ‘we’d show this info on the window so that all customers could see it’. The team also proposed future work that repeated this stereotypical view of consumers: ‘we thought we might set up a prototype of these ads to see if people passing by look at them’. Ultimately, the team did not know if their solution was valuable or not due to having omitted the consumer perspective: ‘we still don’t know if this will actually make a difference, whether people will go and buy stuff from these stores, or if this will be just another one of those websites or stores that open and then shut down because nobody goes there’. Thus, informed solution design in the system design phase required that teams had identified groups that were central to their solution conjecture.

Detailed design: less moving between generalisations and data, and increased use of nonuser-centred arguments

In the detailed design phase, cases in both performance categories moved between generalisations and data as well as scoped perspectives through user grouping, akin to the high-performance category in the system design phase. However, high-performing cases differed from midscale performers by occasionally omitting generalisation and using nonuser-centred arguments in scoping.

First, instead of moving between generalisations and data, the high-performing cases restated users’ comments and experiences regarding detailed solution features, and immediately moved to using the perspectives in creative processing. For example, Team Water in T5 got requests for added features, which they simply listed as: ‘[a citizen scientist] wished to add some contextual data, for example, what is the weather situation, algae concentration, ….’ When new user information was focused on detailed aspects of the solution, teams may have been able to quickly assess its relevance and put it to use.

Second, instead of scoping understanding through user groups, high-performing cases used project-centred arguments. These included dismissing solution-incompatible comments as minor limitations, deferring them as aspects to work on if the solution was developed further by the client company, and ruling some areas out of scope due to time and other resource limitations. For example, Team Farming in T4 explicitly assigned boundaries to the types of user understanding needed by saying that ‘documentation is not the point of the project’.

In contrast, the midscale-performing cases in this phase adhered to patterns similar to the high-performance category in the system design phase, focusing on generalising user perceptions about the solution. While the cases now specified pertinent user groups, they did not shift their focus onto detailed design. For example, Team Finance in T4 had established the value of their solution conjecture to users by moving between generalisations and data ('[business owners said that] this [solution] is something they've never seen, that it is great to have all this data in one place, and that this helps them find new market areas'), but had not resolved the specifics of delivering this value ('we are struggling with what [types of data] we should add and what [business owners] want').

4.3. Using perspectives in creative processing

Table 5 shows an overview of how using perspectives in creative processing was carried out and changed through project phases and performance categories.

Table 5. Overarching patterns and developments in using perspectives in creative processing

Concept development: generalising, improving concepts and articulating value

The use of user and stakeholder perspectives in creative processing could be seen in generalising user-centred knowledge, planning improvements for solution conjectures and defining the value of solution conjectures.

First, high-performing cases in T1 used user perspectives to make user-related generalisations. Specifically, these cases modelled user cognition, namely their thought processes and boundary conditions for decision making. For example, Team Water outlined how to design a sign to encourage people to participate in water quality measurement: ‘first you have to capture people’s attention, then you have to show them that ‘if this happens, then do this’ and then you have to give them instructions, all in the same sign’. Similarly, Team Finance described how price and distance were key influences on how consumers make small purchasing decisions: ‘even though it’s double the price, if [the price of a cinnamon roll] is one euro or two euros it’s so little that if there’s distance [involved], the price doesn’t really affect the decision’. For midscale cases, narrow research foci and suboptimal methods limited user-centred learning, leading to limited capability in applying user perspectives further.

Second, the high-performing cases used user perspectives in improving solution conjectures. Here, the plans presented were actionable and practical, including statements about modifying the solution design (Team Finance: 'how about we make six choices, like when you go and register your company, you'd have six different [support packages to choose from]') or focusing on a new aspect of the solution in future work (Team Water: 'how do we [from the user's point of view] connect two things that would [physically] be separate'). Further, these teams came to the review sessions having already iterated on their solution conjectures based on user research. For example, Team Water had changed the wording and layout of a sign indicating that water quality measurements could be done by passers-by. Midscale cases showed no such iteration in their review sessions.

Third, high-performing cases used perspective taking to define the potential value of solution conjectures upfront. For example, Team Finance justified solution designs with hypothesised limitations of existing solutions from the perspective of the user ('it's not personal nor interesting [to the user]'), users' unmet needs and frustrations ('we have heard over and over how hard it is [for business owners] to find time for themselves') and benefits of the new solution ('we wanted to show [users] that we care about them'). In contrast, midscale-performing cases primarily described expected behaviour with their respective prototypes.

Despite their shortcomings in gathering data to form user perspectives, the midscale-performing cases tried to use human-centred reasoning in creative processing. However, rather than referencing specific user insight, these cases often resorted to egotistical or stereotype-based reasoning. For example, Team Farming highlighted a generic farmer comment on pricing, ‘it’s too expensive to buy fertilizer’, and claimed that their prototype was meant to sell fertiliser for a cheaper price in smaller quantities. When questioned by the reviewers about whether smaller containers of fertiliser would be useful for farmers, the team changed their target user on the fly into people with home plants: ‘this is enough for one flower. At least in my home we don’t have that many flowers that I would want to buy the whole package of fertilizer. So that’s why we went with a bit tinier scale’. This indicates that the team had not built a path from what they knew about farmers’ behaviour and cognition to how their solution conjecture would provide value, and were instead reverting to egotistical reasoning about their own needs. Team Tennis, in turn, reverted to stereotypical reasoning when hypothesising that ‘general tennis fans’ would get excited by a pop-up tennis court.

Both midscale-performing cases referenced the time pressure from course deadlines as a reason for limited perspective taking: ‘we didn’t have time [to iterate]’ and ‘we were so last-minute’. Together, suboptimal method choices and pressure to deliver made the midscale cases resort to making, as Team Farming put it, ‘something to show’ because ‘it is better than nothing’. In contrast, despite operating on the same timeline, cases in the high-performance category found ways to extrapolate knowledge from the user perspectives they had formed. For example, Team Finance was able to establish other-oriented value statements concisely, such as ‘we were thinking about what a personal care item would be for a business, and one of us thought it was a massage. You know, business owners are so tired, especially […] when they work on their own, they have no time to take care of themselves’. This shows generalisation of user cognition (no time to care for self) and what the team’s hypothesis for value is (a massage coupon will work as a personal care item).

System design: emphasis on improving solution conjectures

In the system design phase, cases in both performance categories referenced user perspectives in creative processing akin to high-performing cases in the concept development phase. The performance categories were differentiated by their ability to voice plans for improving solution conjectures.

Here, cases in the high-performance category voiced plausible design directions based on what they had inferred about users, such as ideas for modifying the solution design or shifting user research in a different or more specific direction. For example, Team Water in T3 pointed out not only challenges in their solution design, such as users being confused about when to look at a mobile app and when to handle the connected sensor, but also proposed concrete ways to remedy these shortcomings, such as 'you [could] use the application in a way that you turn it on and set "measure now", then you can put your phone in your pocket or leave it at the shore, then you can go and do the measurement, and then this device will indicate to you with a sound signal and a light that it has now done the measurement, and then you can just pick it up and all the data will automatically be in your phone'. This quote shows that the team had a detailed vision of how to improve their solution conjecture based on users' perspectives.

In contrast, midscale-performing cases only made user-related generalisations or established the value of their solution conjecture, failing to voice specific plans for improving it based on user insights. This was exemplified by Team Finance in T3: ‘this [solution] would be useful for businesses who are starting up, who don’t have these established systems’. While this finding could lead the team to, for example, tailor their solution towards businesses who are starting up, the team did not describe such plans even when directly prompted, as they had not investigated such businesses. Curiously, in the concept development phase, Team Finance had voiced several promising plans tied to user perspectives. In the system design phase, the team partly resorted to binary (yes/no) statements about the solution’s usefulness rather than voicing underlying needs or structured insights. In T3, the team explained these shortcomings with time pressure (‘we had so little time that we didn’t have a chance to [test with customers] yet’, ‘this was basically created in 24 hours’). Team Tennis in T2 also echoed similar explanations: ‘we aren’t yet finished with this [testing]’. While the midscale-performing cases in this phase did not explicitly admit to doing user tests solely for the sake of course deliverables, the time pressure may have forced them to build and test new versions of the solution conjecture faster than they could decide on what aspects truly needed testing and refinement.

Detailed design: focus on specificity of improvements and value delivery

In the detailed design phase, cases in the midscale-performance category showed similar patterns as high-performing cases in the system design phase. Cases in the high-performance category instead omitted generalising user-centred knowledge, focusing on specific improvements for the solution and convincingly describing how it could deliver value.

First, generalisations were overall less numerous in the detailed design phase than in prior phases and were only made by midscale-performing cases. For example, Team Finance in T4 described how small and medium enterprises carry out social media marketing.

Second, while cases in both performance categories highlighted necessary improvements in their solutions, the high-performing cases focused solely on specific improvements. For example, in their high-performing T5, Team Water described needing to improve the audiovisual signals their device gave: 'we haven't exactly figured out what sound [the solution] will make, but it'll be something less concerning for when it connects, and when you lose the connection it'll be more annoying'. Team Farming in T5, despite being a midscale-performing case, showed similar attention to detail: 'it'd be good to have a marker for if a pesticide is dangerous for bees, because farmers don't want to kill them since it influences yield'. However, the midscale-performing cases admitted that significant portions of their solutions were not yet functional, making it likely that the teams would still unearth further necessary improvements. For example, when asked whether their prototype was using real data, Team Farming in T5 responded by listing a plethora of aspects yet to be built, including creating 'simulated data', 'farmer profiles with built-in fields' and the capability to spot pests and see live suggestions from other users. In contrast, Team Finance in T5 had a prototype of a website in which they used real statistics on consumers and small businesses from an official statistics bureau. Thus, the functionality of prototypes influenced how final and polished the proposed improvements were.

Third, while cases in both performance categories established the value of their solutions through user perspectives, the high-performing cases did so more convincingly. For example, Team Water in T5 minimised their solution’s limitations and highlighted positives (‘even though the people we interviewed saw some of these practical problems, they really appreciated the opportunity to expand the usability of the product’) and expressed what users can get from the solution (‘[with this product] they can keep track of and capture data in a more meaningful way’, and ‘[this product] could help the citizen science group reach level two, where their data will be accepted by [country]’). Similarly, Team Finance in T5 had specific taglines (‘it’s a toolkit guiding small businesses to success in a changing environment’) along with positive feedback from the client company. In contrast, Team Tennis in T5 had a bleaker outlook: ‘people said they would use it maybe, well, once or twice. Maybe with extra features they’d use it more often, but it depends on what those features are…’.

5. Discussion

This study investigated how novice design teams used perspective taking to understand users and stakeholders, and how those perspectives were used in solution design. Despite perspective taking being a central part of human-centred design (Hess & Fila 2016), its mechanisms are not yet fully understood. In particular, novice designers have been reported to both succeed and falter in their perspective taking efforts (Sugar 2001; Scott 2008; Smeenk et al. 2016), suggesting that a more nuanced understanding of their processes is needed. In this study, perspective taking patterns were analysed along three dimensions: gathering data to form user perspectives, scoping and making sense of perspectives and using perspectives in creative processing.

5.1. Summary of findings

Perspective taking patterns differed across project phases and novice design teams' intermittent performance. In gathering data to form user perspectives (Table 3), in the first two project phases, cases in the high-performance category focused on understanding user cognition, emotions and behaviour. In turn, midscale performers struggled to investigate user cognition and emotions as well as to find the right users to study. In the detailed design phase, cases in the high-performing category collected increasingly specific user data and focused on solution implementation, while midscale performers collected more general data on user cognition, emotions and behaviour.

In scoping and making sense of perspectives (Table 4), in the first two project phases, high-performing cases moved between user-centred generalisations and data (in the concept development phase) as well as scoped future design efforts by defining user groups along with their specific cognition, emotions and behaviour (in the system design phase). In contrast, midscale-performing cases presented only pieces of data (in the concept development phase) and omitted key user groups. In the detailed design phase, high-performing cases omitted moving between data and generalisations and used nonuser-centred argumentation in scoping decisions, while midscale-performing cases showed patterns akin to high-performing cases in earlier phases.

In using perspectives in creative processing (Table 5), in the first two project phases, cases in the high-performance category generalised user-centred knowledge, using it to plan improvements to and establish the value of solution conjectures. Midscale-performing cases, instead, used egotistical and stereotypical reasoning (in the concept development phase) and struggled in detailing plans for improving solution conjectures (in the system design phase). In the detailed design phase, high-performing cases omitted user-centred generalising, instead focusing on specific user-centred solution improvements and value statements. In turn, midscale-performing cases made user-centred generalisations, with their solutions requiring more significant improvements and their value statements being less convincing than those of the high-performing cases.

5.2. Overarching patterns in perspective taking and team performance

Two overarching observations can be made from the findings, contributing to the understanding of failure in perspective taking and of design-phase-specific perspective taking.

First, the current study reveals several challenges novice design teams faced in creating and applying perspectives. The performance category of three of the four teams changed throughout the design project, with only Team Water placing consistently in the high-performing category. Our findings show how the teams' shortcomings could be traced to perspective taking – initially in gathering and building user understanding, and later in using user-centred knowledge to design and build a valuable solution. Still, the issues faced by each team were unique. For example, Team Finance was high performing in early exploratory design but struggled to use perspectives in creative processing in the system design phase. Team Farming was the opposite, struggling in early exploratory design but learning from its early mistakes, both constructing and using user-centred knowledge in the system design phase. Thus, perspective taking in design can fail in different ways. Team Farming in concept development alone failed in multiple ways: being unable to generalise meaningful characteristics of farmers (Dorst 2011), choosing nonimmersive prototype testing methods (Zoltowski et al. 2012), using egotistical instead of user-centred reasoning (Epley et al. 2004) and ultimately understanding users inaccurately (Chang-Arana et al. 2020a, 2020b).

The results also suggest that initial failure in perspective taking does not necessarily compound throughout the design process. For example, Team Farming was able to recover from poor perspective taking in the concept development phase, and Team Finance delivered good results at the end of detailed design despite struggling in system design. This suggests that high-quality perspective taking at a given point in the design process can mitigate shortcomings in previous phases. Conversely, high performance at an earlier design phase did not always lead to high performance in subsequent phases. Overall, this study specified distinct ways in which perspective taking in novice design teams can fail, but more research is needed to investigate the temporal and causal processes of perspective taking failure.

Second, the results reveal interesting dynamics in the changing scope and nature of perspective taking across design phases. Midscale-performing cases often displayed perspective taking patterns characteristic of high-performing cases in prior project phases. Such ‘lagging behind’ suggests that different phases require distinct types of perspective taking. Further, novices may struggle either to keep up with the project pace or to match their perspective taking behaviours to the design phase. Acknowledging these patterns may help design educators guide project-based learning.

Also, high-performance perspective taking changed at each project phase: it was connected in the concept development phase to exploration in both depth and breadth, in system design to in-depth exploration within a specified and purposeful scope, and in detailed design to narrowing down and ‘freezing’ the previously explored perspectives. Past research has shown that design experts sample depth-first exploration approaches to probe solution viability and feasibility (Ball et al. 1997). The current results suggest that, in the context of perspective taking, managing tradeoffs between exploration depth and breadth on the one hand, and perspective application depth and breadth on the other, may represent a key area of design decision making. Similarly, past studies have shown that design experts engage in preliminary evaluation earlier than novices when developing design solutions (Ahmed, Wallace & Blessing 2003). In the current study, this parallels how perspective scoping and sense making manifested earlier in high-performing cases than in midscale-performing cases.

5.3. Limitations and future research directions

This study used a graduate student project dataset, which limits its generalisability. Our results reflect novice design teams whose members had little project-specific knowledge at the start. The collected data are also limited to proof-of-concept-level product and service design, including neither the initial user research phases nor the later refining of designs into market-ready offerings. More research is needed to investigate the extent to which the perspective taking patterns found here appear in professional and experienced design contexts, as well as in earlier and later phases of the design process. In light of the current study, the effects of perceived time pressure – a variable that has eluded much existing research (e.g., Kouprie & Visser 2009; Oygür 2018) – could be particularly interesting to explore in such studies.

Also, much of the design process happens outside review sessions. Our dataset did not include observations or documentation from the time the teams conducted practical design tasks, such as concept generation, user interaction and prototype building. While observing review sessions provided a summary of the teams’ key processes and the final rationales they reached, more targeted studies could uncover the practical processes of design teams’ internal user-centred negotiations, such as selecting whom to interact with and deciding what user information to use and how. Hence, micro-level studies of perspective taking would be a valuable avenue for future work. Data from users and stakeholders could also enable evaluating the accuracy of the generated perspectives and of the generalisations formed from them. Experimental studies, in turn, might investigate how changing the amount and salience of available time influences perspective taking among student and professional designers.

6. Conclusion

This study investigated perspective taking in user understanding and solution design within the design processes of novice teams. Based on a longitudinal study of four design projects, we describe how perspective taking manifests in novice design processes, changing by project phase and by team performance. Overall, perspective taking manifests through three aggregate dimensions: gathering data to form user perspectives, scoping and making sense of perspectives and using perspectives in creative processing. In the earlier exploratory design phases, high-performing cases focused on exploring perspectives in breadth and depth, while midscale-performing cases struggled in data collection and, by extension, in both learning and applying user-centred insights. Later, in detailed design, high-performing cases narrowed down their perspective taking in favour of implementation activities, while midscale-performing cases were still exploring user perspectives to guide solution design. This study also showed that novice design team performance varied throughout the process, supporting the notion that different types of perspective taking are necessary at different design phases. Also, midscale-performing cases were generally ‘lagging behind’ in their perspective taking patterns, suggesting that novice designers struggle to keep up with the project pace and to match their perspective taking approaches to the design phase. These findings build a nuanced understanding of perspective taking in user-centred design processes and can support project-based design education.

Relevance to design practice

High-quality perspective taking hinges initially on collecting broad user data and accurately making sense of it, whereas later phases call for narrowing the focus of activities and following through on the implications of prior perspective taking for the design solution.

Financial support

This work was supported by the Future Makers grant of the Technology Industries of Finland Centennial Foundation and Jane and Aatos Erkko Foundation.

A. Appendix

A.1. Design challenges and grade scores

The five design challenges whose review sessions were observed in this study are described below, along with their grading. Overall, the three sub-grades showed similar means and standard deviations (completeness: mean = 4.0, σ = 0.8; learning: mean = 3.6, σ = 1.0; understanding: mean = 3.8, σ = 1.3).

References

Ahmed, S., Wallace, K. M. & Blessing, L. T. M. 2003 Understanding the differences between how novice and experienced designers approach design tasks. Research in Engineering Design 14 (1), 1–11; doi:10.1007/s00163-002-0023-z.
Atman, C. J. 2019 Design timelines: concrete and sticky representations of design process expertise. Design Studies 65, 125–151; doi:10.1016/j.destud.2019.10.004.
Ball, L. J. & Christensen, B. T. 2019 Advancing an understanding of design cognition and design metacognition: progress and prospects. Design Studies 65, 35–59; doi:10.1016/j.destud.2019.10.003.
Ball, L. J., Evans, J. B. T., Dennis, I. & Ormerod, T. C. 1997 Problem-solving strategies and expertise in engineering design. Thinking & Reasoning 3 (4), 247–270; doi:10.1080/135467897394284.
Beckman, S. L. & Barry, M. 2007 Innovation as a learning process: embedding design thinking. California Management Review 50 (1), 25–56; doi:10.2307/41166415.
Björklund, T. A. 2013 Initial mental representations of design problems: differences between experts and novices. Design Studies 34 (2), 135–160; doi:10.1016/j.destud.2012.08.005.
Both, T. & Baggereor, D. 2009 Bootcamp Bootleg, online document https://dschool.stanford.edu/resources/the-bootcamp-bootleg. Accessed: 18.6.2022.
Braun, V. & Clarke, V. 2006 Using thematic analysis in psychology. Qualitative Research in Psychology 3 (2), 77–101; doi:10.1191/1478088706qp063oa.
Cascini, G., Nagai, Y., Georgiev, G. V., Zelaya, J., Becattini, N., Boujut, J. F., Casakin, H., Crilly, N., Dekoninck, E., Gero, J., Goel, A., Goldschmidt, G., Gonçalves, M., Grace, K., Hay, L., Le Masson, P., Maher, M. L., Marjanović, D., Motte, D., Papalambros, P., Sosa, R., Srinivasan, V., Štorga, M., Tversky, B., Yannou, B. & Wodehouse, A. 2022 Perspectives on design creativity and innovation research: 10 years later. International Journal of Design Creativity and Innovation 10 (1), 1–30; doi:10.1080/21650349.2022.2021480.
Chakrabarti, A., Morgenstern, S. & Knaab, H. 2004 Identification and application of requirements and their impact on the design process: a protocol study. Research in Engineering Design 15 (1), 22–39; doi:10.1007/s00163-003-0033-5.
Chamorro-Koc, M., Popovic, V. & Emmison, M. 2008 Using visual representation of concepts to explore users and designers’ concepts of everyday products. Design Studies 29 (2), 142–159; doi:10.1016/j.destud.2007.12.005.
Chang-Arana, Á. M., Surma-aho, A., Li, J., Yang, M. C. & Hölttä-Otto, K. 2020a Reading the user’s mind: designers show high accuracy in inferring design-related thoughts and feelings. In Proceedings of the ASME Design Engineering Technical Conference. American Society of Mechanical Engineers (ASME); doi:10.1115/detc2020-22245.
Chang-Arana, Á. M., Piispanen, M., Himberg, T., Surma-aho, A., Alho, J., Sams, M. & Hölttä-Otto, K. 2020b Empathic accuracy in design: exploring design outcomes through empathic performance and physiology. Design Science 6, e16; doi:10.1017/dsj.2020.14.
Clark, M. A., Robertson, M. M. & Young, S. 2019 “I feel your pain”: a critical review of organizational research on empathy. Journal of Organizational Behavior 40 (2), 166–192; doi:10.1002/job.2348.
Crilly, N. & Moroşanu Firth, R. 2019 Creativity and fixation in the real world: three case studies of invention, design and innovation. Design Studies 64, 169–212; doi:10.1016/j.destud.2019.07.003.
Cross, N. 1982 Designerly ways of knowing. Design Studies 3 (4), 221–227; doi:10.1016/0142-694X(82)90040-0.
Cross, N. 2004 Expertise in design: an overview. Design Studies 25 (5), 427–441, online document http://oro.open.ac.uk/3271/1/Expertise_Overview.pdf. Accessed: 26.6.2018.
Cuff, B. M. P., Brown, S. J., Taylor, L. & Howat, D. J. 2014 Empathy: a review of the concept. Emotion Review 8 (2), 144–153; doi:10.1177/1754073914558466.
Davis, M. H. 1983 Measuring individual differences in empathy: evidence for a multidimensional approach. Journal of Personality and Social Psychology 44 (1), 113–126; doi:10.1037/0022-3514.44.1.113.
Dorst, K. 2011 The core of “design thinking” and its application. Design Studies 32 (6), 521–532; doi:10.1016/j.destud.2011.07.006.
Dorst, K. & Cross, N. 2001 Creativity in the design process: co-evolution of problem-solution. Design Studies 22 (5), 425–437; doi:10.1016/S0142-694X(01)00009-6.
Epley, N., Keysar, B., Van Boven, L. & Gilovich, T. 2004 Perspective taking as egocentric anchoring and adjustment. Journal of Personality and Social Psychology 87 (3), 327–339; doi:10.1037/0022-3514.87.3.327.
Galinsky, A. D. & Moskowitz, G. B. 2000 Perspective-taking: decreasing stereotype expression, stereotype accessibility, and in-group favoritism. Journal of Personality and Social Psychology 78 (4), 708–724; doi:10.1037/0022-3514.78.4.708.
Galinsky, A. D., Maddux, W. W., Gilin, D. & White, J. B. 2008 Why it pays to get inside the head of your opponent: the differential effects of perspective taking and empathy in negotiations. Psychological Science 19 (4), 378–384; doi:10.1111/j.1467-9280.2008.02096.x.
Gero, J. & Milovanovic, J. 2021 The situated function-behavior-structure co-design model. CoDesign 17 (2), 211–236; doi:10.1080/15710882.2019.1654524.
Hanington, B. & Martin, B. 2019 Universal Methods of Design Expanded and Revised: 125 Ways to Research Complex Problems, Develop Innovative Ideas, and Design Effective Solutions. Rockport.
Hess, J. L. & Fila, N. D. 2016 The manifestation of empathy within design: findings from a service-learning course. CoDesign 12 (1–2), 93–111.
Heylighen, A. & Dong, A. 2019 To empathise or not to empathise? Empathy and its limits in design. Design Studies 65, 107–124; doi:10.1016/j.destud.2019.10.007.
Hölttä-Otto, K., Otto, K., Song, C., Luo, J., Li, T., Seepersad, C. C. & Seering, W. 2018 The characteristics of innovative, mechanical products—10 years later. Journal of Mechanical Design 140 (8), 084501; doi:10.1115/1.4039851.
Hyysalo, S. 2006 The role of learning-by-using in the design of health care technologies: a case study. Information Society 22, 89–100; doi:10.1080/01972240600567196.
Johnson, D. G., Genco, N., Saunders, M. N., Williams, P., Seepersad, C. C. & Hölttä-Otto, K. 2014 An experimental investigation of the effectiveness of empathic experience design for innovative concept generation. Journal of Mechanical Design 136 (5), 051009; doi:10.1115/1.4026951.
Kelley, D. 2015 The field guide to human-centered design, online document http://www.designkit.org/resources/1. Accessed: 18.6.2022.
Kouprie, M. & Visser, F. S. 2009 A framework for empathy in design: stepping into and out of the user’s life. Journal of Engineering Design 20 (5), 437–448; doi:10.1080/09544820902875033.
Ku, G., Wang, C. S. & Galinsky, A. D. 2015 The promise and perversity of perspective-taking in organizations. Research in Organizational Behavior 35, 79–102; doi:10.1016/j.riob.2015.07.003.
Lauff, C. A., Knight, D., Kotys-Schwartz, D. & Rentschler, M. E. 2020 The role of prototypes in communication between stakeholders. Design Studies 66, 1–34; doi:10.1016/j.destud.2019.11.007.
Li, J. & Hölttä-Otto, K. 2020 The influence of designers’ cultural differences on the empathic accuracy of user understanding. The Design Journal 23 (5), 779–796; doi:10.1080/14606925.2020.1810414.
Li, J., Surma-aho, A., Chang-Arana, Á. M. & Hölttä-Otto, K. 2021 Understanding customers across national cultures: the influence of national cultural differences on designers’ empathic accuracy. Journal of Engineering Design 32, 538–558; doi:10.1080/09544828.2021.1928022.
Luck, R. 2007 Learning to talk to users in participatory design situations. Design Studies 28 (3), 217–242; doi:10.1016/j.destud.2007.02.002.
McMullen, J. S. 2010 Perspective taking and the heterogeneity of the entrepreneurial imagination. In Advances in Austrian Economics, pp. 113–144. Emerald Group; doi:10.1108/S1529-2134(2010)0000014009.
Micheli, P., Wilner, S. J. S., Bhatti, S. H., Mura, M. & Beverland, M. B. 2019 Doing design thinking: conceptual review, synthesis, and research agenda. Journal of Product Innovation Management 36 (2), 124–148; doi:10.1111/jpim.12466.
O’Connell Corcoran, K. & Mallinckrodt, B. 2000 Adult attachment, self-efficacy, perspective taking, and conflict resolution. Journal of Counseling and Development 78 (4), 473–483; doi:10.1002/j.1556-6676.2000.tb01931.x.
Oygür, I. 2018 The machineries of user knowledge production. Design Studies 54, 23–49; doi:10.1016/j.destud.2017.10.002.
Paton, B. & Dorst, K. 2011 Briefing and reframing: a situated practice. Design Studies 32 (6), 573–587; doi:10.1016/j.destud.2011.07.002.
Pedersen, S. 2020 Staging negotiation spaces: a co-design framework. Design Studies 68, 58–81; doi:10.1016/j.destud.2020.02.002.
Popovic, V. 2004 Expertise development in product design – strategic and domain-specific knowledge connections. Design Studies 25 (5), 527–545; doi:10.1016/j.destud.2004.05.006.
Redström, J. 2006 Towards user design? On the shift from object to user as the subject of design. Design Studies 27 (2), 123–139; doi:10.1016/j.destud.2005.06.001.
Saldaña, J. 2013 The Coding Manual for Qualitative Researchers, 2nd edn. SAGE Publications.
Sanders, E. B.-N. 1992 Converging perspectives: product development research for the 1990s. Design Management Journal (Former Series) 3 (4), 49–54; doi:10.1111/j.1948-7169.1992.tb00604.x.
Saunders, M. N., Seepersad, C. C. & Hölttä-Otto, K. 2011 The characteristics of innovative, mechanical products. Journal of Mechanical Design 133 (2), 021009; doi:10.1115/1.4003409.
Schneider, J. & Hall, J. 2011 Why most product launches fail, online document https://hbr.org/2011/04/why-most-product-launches-fail. Accessed: 18.6.2022.
Scott, J. B. 2008 The practice of usability: teaching user engagement through service-learning. Technical Communication Quarterly 17 (4), 381–412; doi:10.1080/10572250802324929.
Sharrock, W. & Anderson, B. 1994 The user as a scenic feature of the design space. Design Studies 15 (1), 5–18; doi:10.1016/0142-694X(94)90036-1.
Smeenk, W., Tomico, O. & van Turnhout, K. 2016 A systematic analysis of mixed perspectives in empathic design: not one perspective encompasses all. International Journal of Design 10 (2), 31–48.
Sugar, W. A. 2001 What is so good about user-centered design? Documenting the effect of usability sessions on novice software designers. Journal of Research on Computing in Education 33 (3), 235–250; doi:10.1080/08886504.2001.10782312.
Surma-aho, A. & Hölttä-Otto, K. 2022 Conceptualization and operationalization of empathy in design research. Design Studies 78, 101075; doi:10.1016/j.destud.2021.101075.
Van der Linden, V., Dong, H. & Heylighen, A. 2019 Tracing architects’ fragile knowing about users in the socio-material environment of design practice. Design Studies 63, 65–91; doi:10.1016/j.destud.2019.02.004.
Van Echtelt, F. E. A., Wynstra, F., van Weele, A. J. & Duysters, G. 2008 Managing supplier involvement in new product development: a multiple-case study. Journal of Product Innovation Management 25 (2), 180–201; doi:10.1111/j.1540-5885.2008.00293.x.
Walther, J., Miller, S. E. & Sochacka, N. W. 2017 A model of empathy in engineering as a core skill, practice orientation, and professional way of being. Journal of Engineering Education 106 (1), 123–148; doi:10.1002/jee.20159.
Walz, D. B., Elam, J. J. & Curtis, B. 1993 Inside a software design team: knowledge acquisition, sharing, and integration. Communications of the ACM 36 (10), 63–77; doi:10.1145/163430.163447.
Xue, H. & Desmet, P. M. A. 2019 Researcher introspection for experience-driven design research. Design Studies 63, 37–64; doi:10.1016/j.destud.2019.03.001.
Zaki, J. 2014 Empathy: a motivated account. Psychological Bulletin 140 (6), 1608–1647; doi:10.1037/a0037679.
Zoltowski, C. B., Oakes, C. & Cardella, M. E. 2012 Students’ ways of experiencing human-centered design. Journal of Engineering Education 101 (1), 28–59; doi:10.1002/j.2168-9830.2012.tb00040.x.
Table 1. Team composition and project brief of the four cases

Figure 1. Course structure, data collection points (in gray) and grade scores averaged from three sub-grades. Note that this visualisation omits time dedicated to an initial rehearsal project, holidays and dedicated documentation writing and presentation preparation.

Figure 2. Data sources and analysis process.

Figure 3. Hierarchical organisation of qualitative codes developed in this study.

Table 2. Design phase and performance categorisation

Table 3. Overarching patterns and developments in gathering data to form user perspectives

Table 4. Overarching patterns and developments in scoping and making sense of perspectives

Table 5. Overarching patterns and developments in using perspectives in creative processing