
Evaluating the feeling of control in virtual object translation on 2D interfaces

Published online by Cambridge University Press:  02 March 2023

Wenxin Sun
Affiliation:
Design School, Xi'an Jiaotong-Liverpool University, Suzhou, China Department of Civil Engineering and Industrial Design, University of Liverpool, Liverpool, UK
Mengjie Huang*
Affiliation:
Design School, Xi'an Jiaotong-Liverpool University, Suzhou, China
Chenxin Wu
Affiliation:
School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
Rui Yang
Affiliation:
School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
Ji Han
Affiliation:
Business School, University of Exeter, Exeter, UK
Yong Yue
Affiliation:
School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
Author for correspondence: Mengjie Huang, E-mail: [email protected]

Abstract

Computer-aided design (CAD) plays an essential role in creative idea generation on 2D screens during the design process. In most CAD scenarios, virtual object translation is an essential operation, and it is commonly used when designers simulate their innovative solutions. The degrees of freedom (DoF) of virtual object translation modes have been found to directly impact users’ task performance and psychological aspects in simulated environments. Little is known in the existing literature about the sense of agency (SoA), a critical psychological aspect emphasizing the feeling of control, in translation modes on 2D screens during the design process. Hence, this study aims to assess users’ SoA in virtual object translation modes on mouse-based, touch-based, and handheld augmented reality (AR) interfaces through subjective and objective measures, including self-report, task performance, and electroencephalogram (EEG) data. Based on our findings, users perceived a greater feeling of control in the 1DoF translation mode than in the 3DoF mode during the design process, which may help them generate more creative ideas; additionally, the handheld AR interface offers a weaker feeling of control than the mouse- and touch-based interfaces, which may negatively affect design quality and creativity. This research contributes to the current literature by analyzing the association between virtual object translation modes and SoA, as well as the relationship between different 2D interfaces and SoA in CAD. Based on these findings, we propose several design considerations for virtual object translation on 2D screens, which may enable designers to perceive a desirable feeling of control during the design process.

Type
Research Article
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Introduction

Computer-aided design (CAD) has always been popular among designers and plays an essential role in design inspiration and innovation. CAD helps designers to simulate and refine creative ideas in the design process; meanwhile, the user experience in CAD is associated with designers’ creative thinking and innovative solutions (Veisz et al., Reference Veisz, Namouz, Joshi and Summers2012). In addition, CAD operations, such as virtual object translation, have been explored for improving users’ task performance and psychological aspects, which could influence their creative idea generation (Alkemade et al., Reference Alkemade, Verbeek and Lukosch2017). In order to enhance the user experience in the design process, CAD features, such as different user interface types and manipulation modes, need to be investigated by designers and researchers.

It is still common for designers to employ CAD on 2D screens. Specifically, they are accustomed to the mouse-based interface, which is the most prevalent user interface type for virtual object manipulation on 2D screens. It has been suggested that CAD software can be transferred from desktop computers to mobile devices to improve accessibility (Lupinetti et al., Reference Lupinetti, Cabiddu, Giannini and Monti2019). Besides, recent advancements in augmented reality (AR) technology have enabled 2D screens to support interactions between real and virtual environments, allowing designers to manipulate virtual objects on handheld mobile devices while remaining aware of their physical surroundings (Goh et al., Reference Goh, Sunar and Ismail2019). On the one hand, 2D interfaces, regardless of whether they are mouse-based, touch-based, or handheld AR interfaces, offer similar features for virtual object manipulation with various input modalities (Moldovan et al., Reference Moldovan, Nicula, Pasca, Popa, Namburu, Oros and Brie2020). On the other hand, previous research highlighted the significance of comparing these 2D interfaces to better comprehend their differences (Besançon et al., Reference Besançon, Ynnerman, Keefe, Yu and Isenberg2021). These 2D interfaces, for instance, present different visual experiences to users; in particular, the handheld AR interface differs substantially from the other interface types.

Virtual object translation is a critical feature in most simulated environments, particularly in CAD scenarios (Alkemade et al., Reference Alkemade, Verbeek and Lukosch2017; Lupinetti et al., Reference Lupinetti, Cabiddu, Giannini and Monti2019). Relevant interactive techniques for displaying a 2D projection of a virtual object on 2D screens have been developed to overcome the challenge of controlling a virtual object through 2D inputs on mouse- and touch-based interfaces (Reisman et al., Reference Reisman, Davidson and Han2009). Researchers have primarily proposed concepts of degrees of freedom (DoF) separation and DoF integration for virtual object translation (Wang et al., Reference Wang, MacKenzie, Summers and Booth1998; Bonnici et al., Reference Bonnici, Akman, Calleja, Camilleri, Fehling, Ferreira and Rosin2019). There are three DoFs along the x-, y-, and z-axes in the simulated environment for moving virtual objects in three different directions, and two common modes for translation (1DoF mode and 3DoF mode) are proposed based on the concepts of DoF separation and DoF integration. The 1DoF translation mode allows users to translate a virtual object on one axis and then switch to another, whereas the 3DoF translation mode enables a virtual object to move simultaneously along all three axes (Sun et al., Reference Sun, Huang, Yang, Zhang, Wang, Han and Yue2020, Reference Sun, Huang, Yang, Han and Yue2021). The 1DoF and 3DoF translation modes are broadly applied on 2D interfaces and are particularly popular CAD features these days (Rogers et al., Reference Rogers, J, Frommel, Stamm and Weber2019; Wodehouse et al., Reference Wodehouse, Loudon and Urquhart2020). Prior research compared DoF integration and DoF separation to explore task performance when users control virtual contents on 2D screens (Lee et al., Reference Lee, Yang, Kim, Jo, Kim, Kim and Choi2009; Bai et al., Reference Bai, Lee and Billinghurst2012).

Prior literature has attempted to improve the user experience, especially its psychological aspects, in CAD scenarios. Sense of agency (SoA) is a psychological concept that has recently been introduced into the domain of human–computer interaction and has shown its potential in the design research area (Wen et al., Reference Wen, Kuroki and Asama2019). SoA is a well-developed concept that originated in neuroscience, emphasizing users’ feeling of control over their actions (Khanna et al., Reference Khanna, Pascual-Leone, Michel and Farzan2015; Seghezzi et al., Reference Seghezzi, Zirone, Paulesu and Zapparoli2019). In addition, different digital devices, such as desktops and tablets, have altered people's experiences of performing an action in the real world, affecting their feeling of control in everyday life. Several design guidelines concentrating on the feeling of control for different user interfaces have been proposed by scholars (Lukoff et al., Reference Lukoff, Lyngs, Zade, Liao, Choi, Fan and Hiniker2021). Recent research revealed that input modalities, such as mouse- and touch-based inputs, affect SoA, and it has been claimed that users who employ their own hands, without external devices, perceive a stronger feeling of control while performing actions like touching a button on 2D interfaces (Coyle et al., Reference Coyle, Moore, Kristensson, Fletcher and Blackwell2012; Bergstrom-Lehtovirta et al., Reference Bergstrom-Lehtovirta, Coyle, Knibbe and Hornbæk2018). It remains unclear, however, how different translation modes (1DoF and 3DoF) and different 2D interfaces (mouse-based, touch-based, and handheld AR interfaces) affect users’ SoA in CAD scenarios, which is a research gap that needs to be filled.

Translation modes and interface types in CAD

In the design process, CAD is extensively employed by designers to inspire and innovate their designs (Camba et al., Reference Camba, Contero, Johnson, Marcus, Møllenbach, Abascal and Sturdee2014). The use of CAD appears to provide significant benefits during the idea-generation process (Veisz et al., Reference Veisz, Namouz, Joshi and Summers2012; Hong and Economou, Reference Hong and Economou2022), while it is also widely used to refine creative concepts at a later stage of the design process (Wang et al., Reference Wang, Bai, Billinghurst, Zhang, Wei, Xu and Zhang2021). CAD mainly runs on 2D screens, such as desktops or mobile devices, which allow designers to manipulate virtual objects in real time. Virtual object translation, one of the most classical object manipulations in CAD, has become essential across interface types (Alkemade et al., Reference Alkemade, Verbeek and Lukosch2017; Lupinetti et al., Reference Lupinetti, Cabiddu, Giannini and Monti2019), regardless of whether they are mouse-based (Bonnici et al., Reference Bonnici, Akman, Calleja, Camilleri, Fehling, Ferreira and Rosin2019), touch-based (Goh et al., Reference Goh, Sunar and Ismail2019), or handheld AR interfaces (Moldovan et al., Reference Moldovan, Nicula, Pasca, Popa, Namburu, Oros and Brie2020). In related work, scholars developed CAD software that allows users to control virtual objects on both desktops and mobile tablets to enhance the user experience (Atilola and Linsey, Reference Atilola and Linsey2015). In addition, scholars have attempted to transfer CAD to AR environments and have examined designers’ performance and subjective feelings when controlling virtual objects (Lupinetti et al., Reference Lupinetti, Cabiddu, Giannini and Monti2019).

Designers have used the traditional mouse to control virtual objects in CAD for several decades. Users manipulate a virtual object by pressing the mouse button and moving the mouse cursor positioned on top of the 2D projection of the desired object on the 2D screen. To overcome the challenge of moving a 3D object on a 2D screen, relevant techniques have been developed to transform 2D points into 3D coordinates, with support for the third dimension (z-axis) on 2D screens (Khan and Tunçer, Reference Khan and Tunçer2019). For example, prior research proposed the Rotate'N Translate (RNT) algorithm to support 2D input modalities for 3D object transformations on 2D screens (Goh et al., Reference Goh, Sunar and Ismail2019). Scholars have explored two basic virtual object translation modes, DoF separation and DoF integration (Alibali, Reference Alibali2005; Bonnici et al., Reference Bonnici, Akman, Calleja, Camilleri, Fehling, Ferreira and Rosin2019), and both are commonly used in mouse-based CAD (Kim and Han, Reference Kim and Han2019). Moreover, previous studies showed that DoF separation, rather than DoF integration, can be employed as a translation mode to improve task performance (Alibali, Reference Alibali2005; Lupinetti et al., Reference Lupinetti, Cabiddu, Giannini and Monti2019), which is why scholars have developed various interactive approaches based on DoF separation (Hancock et al., Reference Hancock, Ten Cate and Carpendale2009; Reisman et al., Reference Reisman, Davidson and Han2009; Bonnici et al., Reference Bonnici, Akman, Calleja, Camilleri, Fehling, Ferreira and Rosin2019).
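To make this 2D-to-3D mapping concrete, the sketch below shows one common way to unproject a screen point into a world-space ray, the kind of building block that techniques such as RNT rely on. It is a minimal illustration in Python with NumPy, assuming an OpenGL-style camera with normalized device coordinates in [-1, 1]; the function names are illustrative and not taken from the cited work.

```python
import numpy as np

def unproject(screen_xy, inv_view_proj, width, height, depth):
    """Map a pixel position to world coordinates at a given NDC depth."""
    # Pixel coordinates -> normalized device coordinates in [-1, 1].
    ndc = np.array([
        2.0 * screen_xy[0] / width - 1.0,
        1.0 - 2.0 * screen_xy[1] / height,  # flip y: screen y grows downward
        depth,
        1.0,
    ])
    world = inv_view_proj @ ndc   # back through projection and view
    return world[:3] / world[3]   # perspective divide

def cursor_ray(screen_xy, inv_view_proj, width, height):
    """Build the world-space ray under the cursor (origin, unit direction)."""
    near = unproject(screen_xy, inv_view_proj, width, height, depth=-1.0)
    far = unproject(screen_xy, inv_view_proj, width, height, depth=1.0)
    direction = far - near
    return near, direction / np.linalg.norm(direction)
```

Intersecting such a ray with an axis or plane then yields the 3D position that a 2D drag should produce, which is the basis of the translation modes discussed in this paper.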

The use of finger gestures, rather than the traditional mouse, is more flexible, allowing single-contact, multi-contact, and two-hand modalities for virtual object translation in CAD (Moldovan et al., Reference Moldovan, Nicula, Pasca, Popa, Namburu, Oros and Brie2020). Touchscreen devices face the same barrier as desktops, so related techniques, such as the RNT algorithm (Goh et al., Reference Goh, Sunar and Ismail2019), can be used to address the challenge of manipulating a virtual object on 2D screens. Accordingly, DoF separation and DoF integration have been investigated as two basic virtual object translation modes on the touch-based interface (Hancock et al., Reference Hancock, Ten Cate and Carpendale2009; Reisman et al., Reference Reisman, Davidson and Han2009). As an example of DoF separation and DoF integration, scholars developed a mobile CAD system to increase accessibility from traditional desktops to mobile devices (Lupinetti et al., Reference Lupinetti, Cabiddu, Giannini and Monti2019). Moreover, DoF separation can be introduced into touch-based interactive techniques to enhance task performance compared with DoF integration (Nanjappan et al., Reference Nanjappan, Shi, Liang, Xiao, Lau and Hasan2019; Dong et al., Reference Dong, Piumsomboon, Zhang, Clark, Bai and Lindeman2020).

Handheld AR, which refers to AR systems built on handheld mobile devices, is particularly suited for implementation with touch-based interactive techniques (Moldovan et al., Reference Moldovan, Nicula, Pasca, Popa, Namburu, Oros and Brie2020). The handheld AR interface, as distinguished from the mouse- and touch-based interfaces, enables users to interact with virtual objects using finger gestures on 2D screens integrated with physical surroundings (Bai et al., Reference Bai, Lee and Billinghurst2012). Meanwhile, individuals adjust their vision by moving around in the physical space and observing virtual objects shown on the 2D screen (Goh et al., Reference Goh, Sunar and Ismail2019; Moldovan et al., Reference Moldovan, Nicula, Pasca, Popa, Namburu, Oros and Brie2020). Besides, CAD can be implemented in AR environments, and researchers have investigated users’ task performance and subjective experience in virtual object manipulation tasks (Lupinetti et al., Reference Lupinetti, Cabiddu, Giannini and Monti2019). Related literature indicates that these AR techniques have the potential to improve user experience by providing natural interactions and eliminating spatial issues (Zhou et al., Reference Zhou, Duh and Billinghurst2008; Rekimoto, Reference Rekimoto2014). In addition, DoF separation and DoF integration should be adopted in handheld AR settings (Louis et al., Reference Louis, Troccaz, Rochet-Capellan, Hoyek and Bérard2020; Su et al., Reference Su, Sunar and Ismail2020, Reference Sun, Huang, Wu and Yang2022a, Reference Sun, Huang, Wu and Yang2022b), because they tackle the issue of interacting with virtual objects combined with the physical world (Lee et al., Reference Lee, Yang, Kim, Jo, Kim, Kim and Choi2009; Bai et al., Reference Bai, Lee and Billinghurst2012).

Prior literature contrasted input modalities and underlined the differences between mouse- and touch-based inputs, as well as the benefits and drawbacks of these inputs in virtual object manipulation tasks (Besançon et al., Reference Besançon, Ynnerman, Keefe, Yu and Isenberg2021). Prior studies compared mouse- and touch-based inputs in matching tasks by analyzing self-report and task performance (Tuddenham et al., Reference Tuddenham, Kirk and Izadi2010; Yu et al., Reference Yu, Svetachov, Isenberg, Everts and Isenberg2010). Researchers suggested that finger gestures are more natural user interactions than the traditional mouse (Guerino and Valentim, Reference Guerino and Valentim2020). Previous research indicated that finger gestures reduce manipulation time while allowing users to accomplish manipulation tasks more efficiently and accurately (Wang et al., Reference Wang, MacKenzie, Summers and Booth1998; Knoedel and Hachet, Reference Knoedel and Hachet2011). Current literature also investigated the influence of interactive inputs on subjective feelings, revealing that finger gestures result in better overall subjective assessments during 3D video games (Watson et al., Reference Watson, Hancock, Mandryk and Birk2013). Related studies found similar findings that touch-based inputs are preferable to mouse-based inputs in terms of manipulation time, task accuracy, and subjective satisfaction (Knoedel and Hachet, Reference Knoedel and Hachet2011; Drucker et al., Reference Drucker, Fisher, Sadana, Herron and Schraefel2013). On the other hand, prior research demonstrated that mouse- and touch-based inputs are equally well-suited for task performance of virtual object manipulation (Besançon et al., Reference Besançon, Ynnerman, Keefe, Yu and Isenberg2021), whereas researchers found that finger gestures lead to less precision than the traditional mouse in matching tasks (Sun et al., Reference Sun, Huang, Wu and Yang2022a). It is critical to study CAD-related operations to better understand how to improve the user experience for designers and researchers.

Sense of agency

It is essential to consider the feeling of control during continuous actions as a key index of user experience, since the ability to control virtual objects is central to enhancing task performance and psychological aspects in CAD. SoA, which originated in neuroscience, refers to users’ subjective feeling of control over their own actions (Seghezzi et al., Reference Seghezzi, Zirone, Paulesu and Zapparoli2019), such as CAD operations. In particular, SoA is important in cognitive development since it directly reflects people's feeling of control (Gallagher, Reference Gallagher2000). SoA makes people feel that “they accomplished something” rather than “the system did something” (Wen et al., Reference Wen, Kuroki and Asama2019; Caspar et al., Reference Caspar, De Beir, Lauwers, Cleeremans and Vanderborght2021). SoA is linked to people's voluntary actions, which are regarded as major markers of human behavior and are activated by external factors (Jeunet et al., Reference Jeunet, Albert, Argelaguet and Lécuyer2018), and people have a subjective feeling of control over their voluntary actions when using various input modalities in simulated environments (Gallagher, Reference Gallagher2000). Based on these theoretical foundations, design guidelines have been proposed to improve users’ feeling of control over various user interfaces (Brewer and Kameswaran, Reference Brewer and Kameswaran2018; Lukoff et al., Reference Lukoff, Lyngs, Zade, Liao, Choi, Fan and Hiniker2021).

Two primary approaches, subjective and objective judgments, have been proposed to evaluate SoA in laboratory experiments (Wen et al., Reference Wen, Yamashita and Asama2017; Wang et al., Reference Wang, Huang, Yang, Liang, Han and Sun2022). Self-report is frequently employed as a subjective measure of SoA: after completing the experimental tasks, users recollect their own experiences and make subjective judgments to score each item. Self-report may introduce individual rating bias, so a sufficient sample size is required to reduce personal differences (Lukoff et al., Reference Lukoff, Lyngs, Zade, Liao, Choi, Fan and Hiniker2021). Additionally, prior research investigated SoA by employing self-report along with task performance, which can be applied to assess SoA from another perspective (Kang et al., Reference Kang, Im, Shim, Nahab, Park, Kim and Hallett2015; Gorantla et al., Reference Gorantla, Tedesco, Chandanathil, Maity, Bond, Lewis and Millis2020). Unlike subjective self-report, neurophysiological responses, such as EEG signals, are adopted as objective measures of SoA. Rather than relying on recalled experiences, EEG data are captured to assess users’ feeling of control directly (Wen et al., Reference Wen, Kuroki and Asama2019; Sun et al., Reference Sun, Huang, Wu and Yang2022b), employing EEG spectral power, EEG phase coherence, and EEG microstates. Owing to the development of artificial intelligence for signal processing (Wan et al., Reference Wan, Yang, Huang, Zeng and Liu2021; Yu et al., Reference Yu, Yang and Huang2022), EEG has become a valuable and popular tool for evaluating the user's experience when interacting with a product or system. Based on EEG analysis, the alpha band has been shown to be highly related to SoA, in terms of both alpha power (Zito et al., Reference Zito, Wiest and Aybek2020; Nataraj and Sanford, Reference Nataraj and Sanford2021) and alpha coherence (Mathewson et al., Reference Mathewson, Lleras, Beck, Fabiani, Ro and Gratton2011). However, little is known in the existing literature about SoA evaluation using EEG data during continuous manipulation (Wen et al., Reference Wen, Yamashita and Asama2017), such as CAD operations.

In design cognition studies, EEG is widely used to investigate users’ cognitive activities throughout the design process, thus overcoming the limitations of subjective methods based on self-report (Abraham, Reference Abraham2016; Benedek, Reference Benedek, Jung and Vartanian2018). Researchers use EEG signals to detect changes in brain activity during continuous actions in the design process and use this information to explore how designers generate creative ideas (Liu et al., Reference Liu, Li, Xiong, Cao and Yuan2018; Jia and Zeng, Reference Jia and Zeng2021; Li et al., Reference Li, Becattini and Cascini2021; Vieira et al., Reference Vieira, Benedek, Gero, Li and Cascini2022). In particular, prior literature established that the alpha band in the frontal lobe is clearly associated with design inspiration and design innovation (Cao et al., Reference Cao, Zhao and Guo2021). In related studies, neurophysiological activations, especially in the alpha and beta bands, are highly associated with work efficiency and quality in the problem-solving stage (Li et al., Reference Li, Becattini and Cascini2021; Vieira et al., Reference Vieira, Benedek, Gero, Li and Cascini2022). Alpha activation in brain activity has been found to be associated with design outcomes, such as creative idea generation, across the entire design process (Horvat et al., Reference Horvat, Martinec, Lukačević, Perišić and Škec2022; Vieira et al., Reference Vieira, Benedek, Gero, Li and Cascini2022). Researchers have previously considered designers’ cognitive load a crucial indicator of design inspiration during the design process (Liu et al., Reference Liu, Li, Xiong, Cao and Yuan2018; Jia and Zeng, Reference Jia and Zeng2021). In addition, cognitive load plays an important role in determining SoA (Howard et al., Reference Howard, Edwards and Bayliss2016). Hence, as an important element of the user experience, SoA may have a significant influence on the design process in CAD.

Research questions and approach

We focus primarily on users’ feeling of control, namely SoA, in two translation modes (1DoF and 3DoF) on three interface types (mouse-based, touch-based, and handheld AR) in CAD scenarios. Given the research gap stated above, the following research questions are proposed:

  • Research Question 1 (Q1): How do virtual object translation modes (1DoF and 3DoF) affect users’ feeling of control on 2D screens in CAD?

  • Research Question 2 (Q2): What influence do different interface types (mouse-based, touch-based, and handheld AR) have on users’ feeling of control when moving virtual objects in CAD?

To address these research questions, this study applies subjective and objective measures, including self-report, task performance, and electroencephalogram (EEG) data. Among them, subjective self-report along with task performance are frequently adopted measures for assessing SoA in the literature (Roth and Latoschik, Reference Roth and Latoschik2019), and they can also reflect designers’ creative idea generation during the design process (Huang, Reference Huang2005). EEG data have started to show their potential for investigating SoA through EEG spectral power and phase coherence, but more supporting evidence is required in further research (Haggard, Reference Haggard2005; Kang et al., Reference Kang, Im, Shim, Nahab, Park, Kim and Hallett2015). In addition, EEG data are widely applied in design to explore creative idea generation in the design process (Abraham, Reference Abraham2016; Benedek, Reference Benedek, Jung and Vartanian2018). As a first step, three clustering techniques, namely K-means, principal component analysis (PCA), and independent component analysis (ICA), are adopted to extract EEG microstate labels as principal features to further comprehend the entire EEG dataset (Khanna et al., Reference Khanna, Pascual-Leone, Michel and Farzan2015; Von Wegner et al., Reference Von Wegner, Knaut and Laufs2018). Finally, this study concludes with design considerations for application development focused on SoA in virtual object translation on 2D interfaces during the design process. The main contributions of this study are as follows:

  • This study generates novel insight into the association between translation modes and users’ feeling of control on 2D interfaces for the first time, concluding that the 1DoF translation mode is preferable on 2D interfaces when a stronger feeling of control is expected in CAD.

  • This study enhances our understanding of the relationship between different 2D interfaces and users’ feeling of control in virtual object translation tasks for the first time, revealing that the handheld AR interface provides a lower feeling of control than the other 2D interfaces in CAD.

  • This research shows for the first time, based on EEG microstate labels extracted with three clustering algorithms, that alpha power is active in the frontal brain region, which is closely associated with SoA. Moreover, this research offers further supporting evidence for investigating SoA using EEG data from a technical point of view and provides insights into SoA evaluation through the integration of subjective and objective data.

  • This paper concludes with design considerations that can help scholars and designers identify appropriate translation modes and 2D interface types for improving SoA in CAD. Meanwhile, if CAD features, such as translation modes and 2D interface types, lead to a preferable controlling experience, designers may achieve a higher level of design quality and creativity.

Methods

Participants

Thirty-two subjects (16 males and 16 females) were recruited for the user study, with an average age of 23 ± 5 years. According to the literature, a sample size of 25 or 30 observations is adequate in most cases, which is fairly common across statistics (Chang et al., Reference Chang, Huang and Wu2006, Reference Chang, Wu, Ho and Chen2008). In the design research area, some studies have used about 20 subjects in the experimental design phase (Acharya and Chakrabarti, Reference Acharya and Chakrabarti2020; Cun et al., Reference Cun, Mo, Chu, Yu, Zhang, Fan and Chen2021; Fillingim et al., Reference Fillingim, Shapiro, Reichling and Fu2021; Kwon et al., Reference Kwon, Huang and Goucher-Lambert2022; Pejic and Pejic, Reference Pejic and Pejic2022; Sun et al., Reference Sun, Zhang, Li, Zhou and Zhou2022c). Half of the participants had prior experience with CAD software, and the others had limited knowledge of CAD; meanwhile, all participants were familiar with operations on mouse- and touch-based interfaces, but few had prior experience manipulating virtual objects in simulated settings through the handheld AR interface. All were given general information about the study and signed a consent form before the experiment. All participants were right-handed and used their right hand to complete each task. Because participants could decide whether to have their EEG data recorded, 20 of them voluntarily wore an EEG head cap during the experiment to collect brain signals; they were asked to keep their hair clean and to abstain from alcohol and caffeine before the experiment. All 32 participants filled out a questionnaire after completing the tasks in each experimental setting. They could discontinue the experiment at any time for any reason, as they all participated willingly. All participants completed the experimental tasks within the same experimental settings based on 2D screens. After completing all tasks, each participant received a gift of appreciation. This work received ethical approval (No. 20-03-08) from the University Ethics Committee, which permits us to conduct this research and collect user data, such as self-report, performance, and EEG data.

Translation modes

As shown in Figure 1, this study concentrated on the 1DoF and 3DoF translation modes, implemented based on the concepts of DoF separation and DoF integration, respectively. Both translation modes use well-developed interactive techniques (built with Three.js, an open-source library, running in the Chrome browser) to support 2D input modalities for 3D object transformations in simulated environments on desktops and mobile tablets. Specifically, users manipulate a virtual object with mouse- and touch-based inputs by observing and controlling the 2D projection of the virtual object in the simulated world on 2D screens.

Fig. 1. Virtual object translation modes: (i) the 1DoF mode with three arrows, and (ii) the 3DoF mode with one white dot.

Virtual object translation involves manipulating three DoFs along the x-, y-, and z-axes in simulated environments. As displayed in Figure 1i, users employ the 1DoF translation mode to move the virtual object along one of the three arrows (representing the x-, y-, and z-axes separately) and then switch to another arrow to translate the virtual object in another direction. Figure 1ii shows that users move the virtual object through the white dot in the 3DoF translation mode, adjusting the object's position across all three axes simultaneously in the current 2D screen view. In both translation modes, the 2D projection of the virtual object is mapped to 3D coordinates in the simulated setting at each moment of movement, which allows users to move the virtual object in the simulated environment while observing it on the 2D screen. In this study, the 1DoF and 3DoF translation modes were each implemented on the mouse-based, touch-based, and handheld AR interfaces.
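As a rough sketch of how such modes can be implemented (not the study's actual Three.js code), the snippet below constrains a drag to a single axis for the 1DoF mode and to a camera-facing plane through the object for the 3DoF mode. The cursor is assumed to already be converted to a world-space ray, as in the unprojection sketch earlier; all names are illustrative.

```python
import numpy as np

def translate_1dof(obj_pos, axis, ray_origin, ray_dir):
    """1DoF mode: move the object along a single axis (x, y, or z).

    Returns the point on the axis line through obj_pos that is closest
    to the cursor ray, leaving the other two coordinates unchanged.
    """
    axis = axis / np.linalg.norm(axis)
    w0 = obj_pos - ray_origin
    a, b, c = axis @ axis, axis @ ray_dir, ray_dir @ ray_dir
    d, e = axis @ w0, ray_dir @ w0
    denom = a * c - b * b                 # ~0 when axis and ray are parallel
    s = (b * e - c * d) / denom if abs(denom) > 1e-9 else 0.0
    return obj_pos + s * axis

def translate_3dof(obj_pos, camera_forward, ray_origin, ray_dir):
    """3DoF mode: drag within a camera-facing plane through the object,
    so all three world coordinates can change simultaneously."""
    n = camera_forward / np.linalg.norm(camera_forward)
    denom = n @ ray_dir
    if abs(denom) < 1e-9:                 # ray parallel to the drag plane
        return obj_pos
    t = (n @ (obj_pos - ray_origin)) / denom
    return ray_origin + t * ray_dir
```

The key design difference is visible in the return values: the 1DoF version can only change the coordinate along the selected axis, whereas the 3DoF version moves the object anywhere in the plane facing the current screen view.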

Translation modes on 2D interface types

The simulated environments were developed to support 1DoF and 3DoF translation modes on three 2D interface types, including mouse-based, touch-based, and handheld AR interfaces. The details of virtual object translation modes on these 2D interfaces are explained below.

Mouse-based interface

On the mouse-based interface, users apply a traditional mouse to move a virtual object by pressing the mouse button and moving the mouse cursor simultaneously on a 2D screen. In this study, the simulated environment based on the mouse-based interface was implemented with 1DoF and 3DoF translation modes for users controlling virtual objects.

  • 1DoF translation mode. Users apply the mouse cursor to select one of the three arrows on the virtual object. Following selection, they move the virtual object with the mouse cursor along the x-, y-, or z-axes separately. Users release the mouse button when the virtual object reaches the desired location.

  • 3DoF translation mode. Users adopt the mouse cursor to select the virtual object by clicking the white dot. They move the mouse cursor to translate the virtual object on the currently displayed screen view. Users release the mouse button when the virtual object reaches its destination.

Additionally, users need to control the virtual environment (apart from the virtual objects) in order to observe virtual objects and move them to the desired location during translation. Specifically, the left mouse button rotates the scene view, the right mouse button pans it, and the mouse wheel zooms it.

Touch-based interface

On the touch-based interface, users move a virtual object with the one-finger gesture, instead of the traditional mouse, by touching and moving the finger on a 2D screen. The simulated environment was implemented with 1DoF and 3DoF translation modes based on the touch-based interface.

  • 1DoF translation mode. Users adopt the one-finger gesture to select one of the three arrows in a specific direction and then move the finger to translate the virtual object along the axis. When the finger leaves the 2D screen, the virtual object stops.

  • 3DoF translation mode. Users move the virtual object with the one-finger gesture by tapping the white dot and moving the finger. The virtual object is translated on the current scene view. Users release their fingers from the 2D screen to stop moving the virtual object.

Besides, interactions with the simulated environment (apart from the virtual objects) are also supported on the touch-based interface. Specifically, a one-finger gesture rotates the view, a two-finger swipe pans it, and a two-finger pinch zooms it.

Handheld AR interface

On the handheld AR interface, users adopt the same one-finger gesture to control a virtual object on a 2D screen of the handheld mobile device. Different from the other two interfaces, the handheld AR interface displays virtual objects combined with physical surroundings on the AR-enhanced 2D screen. 1DoF and 3DoF modes were implemented based on the handheld AR interface.

  • 1DoF translation mode. Users employ the one-finger gesture to select one of three arrows, and they move the finger across the screen to translate the object on the x-, y-, or z-axes separately. Users can stop moving the virtual object by releasing their fingers.

  • 3DoF translation mode. Users adopt the one-finger gesture to press the white dot and then move the finger to translate the virtual object across all axes synchronously on the 2D screen. The virtual object is stopped from moving when the finger leaves the 2D screen.

In contrast to other 2D interfaces, users are not required to interact with the simulated environment using finger gestures. They walk around in the physical space, rotating, panning, and zooming the scene view with the movement of the handheld mobile device.

Procedure and instrumentation

As illustrated in Figure 2, the experiment in this research consisted of three sessions based on different simulated settings (Session 1: mouse-based desktop, Session 2: touch-based tablet, and Session 3: handheld AR).

Fig. 2. The experimental settings based on (i) Session 1: mouse-based desktop, (ii) Session 2: touch-based tablet, and (iii) Session 3: handheld AR.

Before the experiment, participants were given a short introduction to the order of the three sessions, the operations in the simulated environments, and the target of the translation tasks. Participants also had a chance to practice the 1DoF and 3DoF translation modes on the different 2D interface types while receiving oral instructions on how to move virtual objects with each translation mode. Then, participants began the translation tasks in each session, after 20 of them put on the 32-channel EEG head cap (EMOTIV EPOC Flex). The three sessions, as well as the two virtual object translation modes within each session, proceeded in counterbalanced order.

  • Session 1. The experimental setting included (a) a desktop with a 22-inch LCD screen, (b) a traditional mouse, and (c) an EEG head cap (only for participants who voluntarily agreed to record EEG data), as shown in Figure 2i. Participants used the traditional mouse to complete three translation tasks for each of the two translation modes (1DoF and 3DoF) in counterbalanced order. The three translation tasks were designed with different shapes of virtual objects and different distances between the initial and target positions to avoid cross-task interference. In each task, one movable object and one target object were placed in the simulated environment on the 2D screen, as shown in Figure 3. Participants translated the movable object from its original position to the target position, overlapping it with the target object as precisely as possible.

  • Session 2. The experimental setting consisted of (a) a 12.7-inch touch-based tablet and (b) an EEG head cap (only for participants who voluntarily agreed to record EEG data), as displayed in Figure 2ii. The initial configuration of each translation task was the same as in Session 1, as shown in Figure 3. Participants completed three tasks with each of the two translation modes (1DoF and 3DoF) using the one-finger gesture on the touch-based tablet, again overlapping the movable object with the target object as precisely as possible.

  • Session 3. The experimental setting included (a) a 12.7-inch touch-based tablet with an AR application installed and (b) an EEG head cap (only for participants who voluntarily agreed to record EEG data), as illustrated in Figure 2iii. The initial configuration was consistent with the other two sessions, as shown in Figure 3. Participants translated the movable object to the intended position with each of the two translation modes (1DoF and 3DoF) using the one-finger gesture, as precisely as possible. Unlike the other sessions, participants in Session 3 could walk around in a 4 m × 3.5 m physical space while holding the tablet and conducting translation tasks on it.

Fig. 3. The initial configuration of each translation task, including (a) one movable object and (b) one target object, on the mouse-based, touch-based, and handheld AR interfaces.

After each session, participants were required to complete the Agency Questionnaire (AQ) for subjective analysis. During the experiment, the program recorded performance data automatically. Twenty participants who agreed to submit EEG data were required to wear the EEG head cap for EEG data recording. These measures are described in detail below.

Measures

Subjective measure

This study examined subjective measures across two virtual object translation modes (1DoF and 3DoF modes) and three 2D interface types (mouse-based, touch-based, and handheld AR interfaces). After each session, participants were asked to complete a modified four-item AQ, as described in Table 1, which is widely adopted in the relevant literature (Roth and Latoschik, Reference Roth and Latoschik2019). They rated each item on a 7-point Likert scale ranging from 1 (“not responsive”) to 7 (“extremely responsive”). The means and standard deviations of SoA scores were calculated with equal weights for each item. Before applying a statistical method to the self-reported data, we checked the data and found that they did not meet the assumptions of parametric tests, so a nonparametric approach was required to compare the groups. The Kruskal–Wallis test, which is suitable for interval or ratio measurement scales, was therefore used to compare SoA scores across translation modes and 2D interface types.

Table 1. The four-item agency questionnaire (AQ)
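As a minimal illustration of this subjective analysis, the snippet below runs a Kruskal–Wallis test with SciPy. The scores are invented for demonstration only and do not reproduce the study's data.

```python
from scipy import stats

# Hypothetical mean AQ scores (7-point Likert) for one interface, two modes.
soa_1dof = [5.5, 6.0, 4.8, 5.3, 6.3, 5.0, 5.8, 4.5]
soa_3dof = [4.3, 5.0, 4.5, 3.8, 5.3, 4.0, 4.8, 4.3]

# Kruskal-Wallis H-test: a nonparametric comparison that does not assume
# normally distributed ratings; H approximates a chi-square statistic.
h_stat, p_value = stats.kruskal(soa_1dof, soa_3dof)
print(f"chi2 = {h_stat:.3f}, p = {p_value:.3f}")
```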

Performance measure

This study assessed users’ SoA in two virtual object translation modes (1DoF and 3DoF modes) and on three 2D interface types (mouse-based, touch-based, and handheld AR interfaces) by employing task performance as a complementary measure alongside self-report. Manipulation time was recorded automatically by the program for efficiency analysis. The means and standard deviations of manipulation time were then calculated for translation modes and 2D interfaces. Before applying a statistical method, we confirmed that the performance data did not meet the assumptions of parametric tests, so a nonparametric method was selected to compare the groups. The Wilcoxon signed-rank test was then used to assess statistically significant differences in manipulation time across translation modes and 2D interface types.
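A corresponding sketch for the performance analysis, again with invented numbers: the Wilcoxon signed-rank test is the paired counterpart, fitting a within-subjects design in which every participant used both translation modes.

```python
from scipy import stats

# Hypothetical manipulation times (s) for the same participants in both modes.
time_1dof = [32.1, 45.0, 28.4, 51.2, 38.7, 41.3, 36.0, 44.5]
time_3dof = [35.6, 49.8, 30.1, 50.4, 44.2, 47.0, 39.8, 48.1]

# Wilcoxon signed-rank test on the paired differences.
w_stat, p_value = stats.wilcoxon(time_1dof, time_3dof)
print(f"W = {w_stat:.3f}, p = {p_value:.3f}")
```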

EEG measure

The raw EEG data were sampled at 128 Hz using a 32-channel EEG cap with saline sensors. MATLAB Simulink (R2020b) with the corresponding toolbox was used to process and analyze the EEG data. Following the standard 10-20 EEG location system, electrodes were placed at 32 channels (Fp1, Fp2, F7, F3, F4, F8, FC5, FC1, FCz, FC2, FC6, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz, CP2, CP4, CP6, P7, P3, Pz, P4, P8, O1, and O2), as shown in Figure 4. An open-source MATLAB toolbox, EEGLAB, was employed to complete the EEG data processing for each participant. The processing steps included: (i) a bandpass filter was applied to retain EEG signals in the range of 1–60 Hz; (ii) ICA was applied to remove artifacts such as electromyograms; (iii) the baseline of each participant's EEG signals was removed; (iv) outliers were removed with a Hampel filter; and (v) the alpha (8–12 Hz), beta (13–30 Hz), and gamma (31–50 Hz) bands were extracted by wavelet decomposition analysis. EEG microstate, spectral power, and phase coherence analyses were then applied to the EEG data.

Fig. 4. Channel locations of the EEG head cap.
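The sketch below approximates steps (i) and (iii)–(v) of this pipeline for a single channel in Python with SciPy. It is a simplified stand-in for the EEGLAB workflow, not the authors' code: Butterworth bandpass filters replace the wavelet decomposition of step (v), and the multi-channel ICA artifact removal of step (ii) is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # sampling rate (Hz), as reported above

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth bandpass between lo and hi Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def hampel(x, k=7, n_sigma=3.0):
    """Replace outliers with the local median (a basic Hampel filter)."""
    y = x.copy()
    for i in range(k, len(x) - k):
        window = x[i - k:i + k + 1]
        med = np.median(window)
        mad = 1.4826 * np.median(np.abs(window - med))  # robust sigma estimate
        if abs(x[i] - med) > n_sigma * mad:
            y[i] = med
    return y

def preprocess(channel):
    """Steps (i), (iii), (iv), and (v) for one channel."""
    x = bandpass(channel, 1, 60)   # (i) keep 1-60 Hz
    x = x - np.mean(x)             # (iii) remove the baseline
    x = hampel(x)                  # (iv) remove outliers
    return {                       # (v) extract frequency bands
        "alpha": bandpass(x, 8, 12),
        "beta": bandpass(x, 13, 30),
        "gamma": bandpass(x, 31, 50),
    }
```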

As prior research showed that the frontal brain region is closely associated with SoA, EEG microstate analysis was used in this study to investigate alpha power changes in this region as an initial exploration of the EEG dataset. EEG microstate labels were extracted as key features representing alpha power changes across the entire EEG dataset and were displayed as brain topographic maps (EEG microstate maps) obtained through three clustering algorithms (modified K-means, PCA, and Fast-ICA). Three algorithms were used to cross-check the results and to identify the methods most suitable for this study. As Figure 5 shows, pre-processed EEG data were used to calculate the global field power (GFP) and its maxima; the clustering algorithms were then employed to generate EEG microstate maps. Each algorithm was quantified with two statistics: global explained variance (GEV) and empirical entropy. The three algorithms are detailed below, followed by a simplified code sketch after Figure 5:

  • Modified K-means. The modified K-means technique, one of the classical clustering algorithms in the literature, is a stochastic clustering algorithm for EEG data. An EEG data array (time points × number of channels) was the initial input. The final outputs included the indices of the GFP peaks, the GEV of the microstate labels, the empirical entropy of the microstate labels, and the minimum cost value across all K-means runs. In this research, the EEG microstate labels were displayed as microstate maps, which showed alpha power variations in different brain regions. Because the algorithm is stochastic, different runs may yield different outcomes.

  • PCA. The PCA technique is a common clustering approach with a clear statistical interpretation. PCA-based clustering is deterministic, which makes it beneficial when repeatability of findings is the main objective (Alicja and Maciej, Reference Alicja and Maciej2022). The algorithm received the same input and produced the same types of outputs, such as the GEV and empirical entropy of the microstate labels. Finally, four microstate labels were obtained to show the principal features, displayed as microstate maps.

  • Fast-ICA. The Fast-ICA algorithm is a frequently employed alternative technique for analyzing EEG data. In particular, ICA theory corresponds to a model in which EEG topographies are a linear mixture of an unknown collection of source topographies. As with the previous two algorithms, EEG data were the initial input, and several values (GEV and empirical entropy) were the final outputs. Finally, microstate labels were extracted as principal features of the entire EEG dataset to display alpha power changes with microstate maps.

Fig. 5. The detailed process of EEG microstate analysis.
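To make the pipeline in Figure 5 concrete, the sketch below clusters topographies at GFP peaks and computes GEV and empirical entropy. It simplifies the paper's procedure in two labeled ways: standard scikit-learn k-means is used instead of the polarity-invariant modified k-means, and empirical entropy is taken as the Shannon entropy of the label distribution, which is one plausible reading of that statistic.

```python
import numpy as np
from sklearn.cluster import KMeans

def microstates(eeg, n_states=4):
    """Minimal microstate analysis sketch.

    eeg: array of shape (time points, channels), average-referenced.
    Returns per-sample labels, GEV, and empirical entropy.
    """
    gfp = eeg.std(axis=1)                                  # global field power
    peaks = np.where((gfp[1:-1] > gfp[:-2]) &
                     (gfp[1:-1] > gfp[2:]))[0] + 1         # local GFP maxima
    maps = KMeans(n_clusters=n_states,
                  n_init=10).fit(eeg[peaks]).cluster_centers_

    # Spatial correlation between every sample and every map.
    z_eeg = (eeg - eeg.mean(1, keepdims=True)) / eeg.std(1, keepdims=True)
    z_map = (maps - maps.mean(1, keepdims=True)) / maps.std(1, keepdims=True)
    corr = z_eeg @ z_map.T / eeg.shape[1]
    labels = np.abs(corr).argmax(axis=1)     # polarity ignored when labeling

    # Global explained variance: GFP-weighted squared correlation.
    best = np.abs(corr)[np.arange(len(labels)), labels]
    gev = np.sum((gfp * best) ** 2) / np.sum(gfp ** 2)

    # Empirical (Shannon) entropy of the label distribution, in nats.
    p = np.bincount(labels, minlength=n_states) / len(labels)
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return labels, gev, entropy
```

For four states, this entropy is at most ln 4 ≈ 1.386 nats, which is consistent with the magnitude of the H values reported in the Results section.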

After extracting EEG labels to further observe whether the alpha power was active in the frontal brain regions highly associated with SoA, the other two methods (spectral power and phase coherence) were applied to explore SoA with EEG data in detail.

The EEG spectral power of each electrode was calculated with the short-time Fourier transform (STFT) in the alpha, beta, and gamma bands. Power was averaged within 2 s windows of the EEG signals with 1 s overlap. Brain topographic maps were used to display changes in frequency band power directly. The Wilcoxon signed-rank test, which is commonly used in the related literature for EEG analysis (Kang et al., Reference Kang, Im, Shim, Nahab, Park, Kim and Hallett2015), compared the EEG spectral power (alpha, beta, and gamma power) across translation modes and 2D interface types; this test has been demonstrated to be applicable to EEG data, providing a powerful tool for identifying differences between two small samples or two large samples (Aris et al., Reference Aris, Jalil, Bani, Kaidi and Muhtazaruddin2018).
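A minimal version of this band-power computation, assuming the 128 Hz sampling rate reported earlier; the electrode indexing is hypothetical.

```python
import numpy as np
from scipy.signal import stft

FS = 128  # sampling rate (Hz)

def band_power(channel, band, fs=FS):
    """Average spectral power in a band, using 2 s windows with 1 s overlap."""
    f, t, Z = stft(channel, fs=fs, nperseg=2 * fs, noverlap=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return np.mean(np.abs(Z[mask]) ** 2)   # average over band and windows

# e.g., alpha power at a hypothetical F7 channel index:
# alpha_f7 = band_power(eeg[:, ch_index["F7"]], band=(8, 12))
```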

The EEG phase coherence was computed with inter-site phase clustering (ISPC) to reflect brain connectivity between different brain regions in the alpha, beta, and gamma bands. Then, the means of phase coherence were calculated for translation modes and 2D interfaces. The brain connectivity maps were applied to show changes in phase coherence. This research adopted the Wilcoxon signed-rank test, which is also used to analyze different groups of EEG phase coherences (Kang et al., Reference Kang, Im, Shim, Nahab, Park, Kim and Hallett2015), to compare the EEG phase coherence (alpha, beta, and gamma coherence) across translation modes and 2D interfaces.
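ISPC has a compact definition: the magnitude of the time-averaged unit vector of phase differences between two signals, ranging from 0 (no phase relationship) to 1 (perfect phase locking). Below is a sketch under the assumption that the inputs are already band-filtered; the channel indexing is hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def ispc(x, y):
    """Inter-site phase clustering between two band-filtered signals."""
    phase_x = np.angle(hilbert(x))   # instantaneous phase via Hilbert transform
    phase_y = np.angle(hilbert(y))
    dphi = phase_x - phase_y
    return np.abs(np.mean(np.exp(1j * dphi)))

# e.g., alpha coherence between hypothetical F3 and FC6 channel indices:
# coh = ispc(alpha[:, ch_index["F3"]], alpha[:, ch_index["FC6"]])
```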

Results

Self-report

As shown in Table 2, this study compared SoA scores between the two virtual object translation modes (1DoF and 3DoF). Overall, the 1DoF translation mode obtained a higher SoA score when participants completed tasks on 2D screens, regardless of whether it was the mouse-based, touch-based, or handheld AR interface. Data from subjective self-report demonstrated significant differences in SoA scores between the 1DoF and 3DoF translation modes on the mouse- and touch-based interfaces. Participants rated the 1DoF mode with higher SoA scores than the 3DoF mode (χ2 = 4.260, P = 0.039) when applying the mouse cursor to conduct translation tasks on the 2D screen. Similarly, participants gave the 1DoF mode a higher SoA score than the 3DoF mode (χ2 = 4.080, P = 0.044) when completing translation tasks with finger gestures on the touch-based interface. Although there was no significant difference in SoA scores between the 1DoF and 3DoF translation modes (χ2 = 2.560, P = 0.110) on the handheld AR interface, a decreasing trend in SoA scores was observed from the 1DoF to the 3DoF mode.

Table 2. Mean and standard deviation of SoA scores in two translation modes on different 2D interfaces

Furthermore, this study analyzed SoA scores among the three 2D interface types (mouse-based, touch-based, and handheld AR interfaces) in virtual object translation (averaged over the two translation modes), as displayed in Table 2. The mouse-based interface obtained the highest SoA score on average, while the handheld AR interface received a much lower SoA score than the other two interfaces. Participants rated the mouse-based interface significantly higher than the handheld AR interface in both the 1DoF (χ2 = 7.510, P = 0.006) and 3DoF (χ2 = 5.420, P = 0.020) translation modes. Although there was no statistically significant difference between the touch-based and handheld AR settings, participants gave the touch-based interface notably higher SoA scores than the handheld AR interface in both the 1DoF (χ2 = 2.670, P = 0.102) and 3DoF (χ2 = 2.390, P = 0.123) translation modes. In addition, the mouse-based interface achieved a slightly higher SoA score than the touch-based interface in both the 1DoF (χ2 = 0.660, P = 0.417) and 3DoF (χ2 = 0.720, P = 0.396) translation modes, without significant difference.

Task performance

This study compared manipulation time across the two virtual object translation modes (1DoF and 3DoF). As demonstrated in Table 3, the efficiency analysis showed that the 1DoF translation mode required less manipulation time on average than the 3DoF translation mode, although there was no statistically significant difference between the two modes on the mouse-based (χ2 = 0.180, P = 0.668), touch-based (χ2 = 0.050, P = 0.827), or handheld AR (χ2 = 2.840, P = 0.092) interface. In other words, the 1DoF translation mode may be performed more efficiently than the 3DoF translation mode on these 2D interfaces.

Table 3. Mean and standard deviation of manipulation time in two translation modes on different 2D interfaces

Additionally, this study analyzed differences in manipulation time across the three 2D interface types (mouse-based, touch-based, and handheld AR interfaces). In general, as shown in Table 3, participants spent approximately three times as much manipulation time in the handheld AR setting as in the other two sessions, suggesting that the handheld AR interface may perform less efficiently than the other 2D interfaces. Specifically, in the 1DoF mode, participants spent much more time conducting translation tasks on the handheld AR interface than on the mouse-based (χ2 = 44.260, P < 0.001) and touch-based (χ2 = 44.260, P < 0.001) interfaces. During translation tasks in the 3DoF mode, the handheld AR interface performed less efficiently than the mouse-based (χ2 = 43.290, P < 0.001) and touch-based (χ2 = 43.680, P < 0.001) interfaces. In addition, there was no significant difference in manipulation time between the mouse- and touch-based interfaces in either the 1DoF (χ2 = 1.930, P = 0.165) or 3DoF (χ2 = 0.180, P = 0.668) translation mode.

EEG data

EEG microstate analysis with clustering algorithms

Figure 6 demonstrates the outcomes of applying the K-means, PCA, and ICA clustering algorithms to obtain EEG microstate maps. With K-means, EEG microstate maps A, B, C, and D revealed positive and negative alpha power in the frontal and parietal regions throughout the virtual object translation tasks. The PCA algorithm revealed a similar trend, with positive and negative peaks in alpha power identified in the frontal and parietal brain regions during the experiment. As displayed in EEG microstate maps A, B, C, and D, the ICA algorithm extracted EEG labels showing variations in alpha power over the whole brain experienced by participants during virtual object translation tasks on 2D screens.

Fig. 6. EEG microstate maps obtained by different clustering algorithms (K-means, PCA, and ICA algorithms).

These clustering algorithms were evaluated with GEV and empirical entropy (H). The GEV of K-means was approximately 0.687, revealing that the EEG microstates accounted for 68.7% of the temporal variation of the electrical potential. Among the three approaches, the K-means algorithm yielded the highest GEV. The GEV of the PCA algorithm was approximately 0.618, representing 61.8% of the spatial variation in the electrical potential over time. The ICA algorithm contributed the smallest GEV (0.203) to the interpretation of the spatial variation of the electrical potential over time.

In addition, the highest empirical entropy (H = 1.390) was obtained with the ICA algorithm; entropy quantifies the amount of randomness, so this value reveals a lack of predictable temporal patterns in the EEG microstate sequence. A relatively lower empirical entropy (H = 1.360) was found with the K-means algorithm, indicating better outcomes for interpreting the EEG data as microstate labels. The PCA algorithm had the lowest empirical entropy (H = 1.320), demonstrating the best result among the three algorithms.

Spectral power analysis

Figure 7 displays brain topographic maps reflecting changes in EEG spectral power experienced by participants conducting virtual object translation tasks in the experiment. The frontal alpha power mainly rose from 1DoF to 3DoF translation mode, whereas fluctuations in beta and gamma power were observed across different brain regions.

Fig. 7. EEG spectral power analysis: the brain topographic maps of alpha, beta, and gamma power in (i) Session 1: mouse-based desktop; (ii) Session 2: touch-based tablet; and (iii) Session 3: handheld AR (particularly, the color bars represent the amount of frequency band power).

In Session 1 (the mouse-based desktop session), as shown in Figure 7i, significant increases in frontal alpha power were found at the F7 electrode (z = −3.360, P < 0.001) when participants switched from 1DoF to 3DoF translation mode. A similar trend in alpha power was detected at other electrodes in the frontal, central, and parietal areas without significant difference. Significant increases in frontal beta power were also observed at the F7 electrode (z = −3.061, P = 0.002) from 1DoF to 3DoF translation mode, whereas fluctuations in beta power were detected at other electrodes. There were notable changes in gamma power across various brain regions, but no discernible pattern.

In Session 2 (the touch-based tablet session), as shown in Figure 7ii, alpha power was significantly increased in the frontal and central areas (F4: z = −2.240, P = 0.025; FC5: z = −2.427, P = 0.015; C2: z = −2.165, P = 0.030) when participants switched from 1DoF to 3DoF translation mode. An upward trend in alpha power was also observed at several electrodes in most brain regions. Besides, beta power decreased in the frontal-central area (FCz: z = 2.203, P = 0.027; FC2: z = 2.576, P = 0.010) and increased in the central area (C1: z = −2.389, P = 0.016; C2: z = −3.061, P = 0.002) from 1DoF to 3DoF translation mode. The EEG spectral power analysis revealed an ambiguous trend in gamma power across different brain regions when comparing 1DoF with 3DoF translation mode.

In Session 3 (the handheld AR session), as shown in Figure 7iii, alpha power increased in the frontal (F3: z = −3.397, P < 0.001), central (C5: z = −2.053, P = 0.040; Cz: z = −2.053, P = 0.040), and parietal (CP5: z = −2.240, P = 0.025; CP3: z = −2.688, P = 0.007) areas from the 1DoF to the 3DoF translation mode. An overall fluctuating trend in beta power was observed in the frontal and central areas (F7: z = −1.979, P = 0.048; FC1: z = −2.725, P = 0.006; C1: z = 2.800, P = 0.005; C2: z = −1.979, P = 0.048; CP3: z = 2.800, P = 0.005) when participants switched from the 1DoF to the 3DoF translation mode. Similarly, gamma power significantly increased and decreased in specific brain regions with no consistent trend between the two virtual object translation modes.

As shown in Figure 7, participants exhibited the highest alpha power when employing the handheld AR interface among all three 2D interfaces. According to the EEG spectral power analysis, changes in alpha power were clearly detected at four electrodes (F4, F7, FC2, and FC5) across the different 2D interfaces, as displayed in Figure 8. Specifically, participants experienced higher frontal alpha power in the handheld AR setting than in the other settings, particularly at the F4 and F7 electrodes, whereas they showed equivalent levels of frontal alpha power in the mouse- and touch-based settings. In addition, changes in alpha power were more pronounced in the frontal area (F4 and F7 electrodes) than in the frontal-central area (FC2 and FC5 electrodes) during the experiment.

Fig. 8. Examples of alpha power analysis at four electrodes (F4, F7, FC2, and FC5) in Session 1: mouse-based desktop, Session 2: touch-based tablet, and Session 3: handheld AR.

Phase coherence analysis

Figure 9 displays brain connection maps that are directly represented by the EEG phase coherence experienced by participants during the experiment. In general, the alpha and beta coherences between specific brain regions, particularly between the frontal and other brain regions, were observed to be associated with translation modes, whereas an uncertain trend in gamma coherence was detected across various brain regions.

Fig. 9. EEG phase coherence analysis: the brain connectivity maps of the alpha, beta, and gamma coherences with significant differences in (i) Session 1: mouse-based desktop; (ii) Session 2: touch-based tablet; and (iii) Session 3: handheld AR (specifically, the red lines indicate phase coherence increases and the blue lines indicate phase coherence decreases from 1DoF to 3DoF mode).

In Session 1 (the mouse-based desktop session), as demonstrated in Figure 9i, alpha coherences at F3-FC6 (z = −2.016, P = 0.044) and FC6-Cz (z = −2.128, P = 0.033) increased significantly from 1DoF to 3DoF translation mode. A downward trend in beta coherences was primarily observed at FCz-P4 (z = 2.253, P = 0.040) and F4-Pz (z = 2.202, P = 0.028) from 1DoF to 3DoF mode. Fluctuations in gamma coherences were observed in several brain regions, but with no apparent pattern between the two virtual object translation modes.

In Session 2 (the touch-based tablet session), as shown in Figure 9ii, significant increases in alpha coherences were identified at F3-C4 (z = −2.240, P = 0.025), FC6-Cz (z = −2.091, P = 0.037), and F4-Pz (z = −2.539, P = 0.011) when participants switched from 1DoF to 3DoF translation mode. Significant declines in beta coherences were found at F3-F4 (z = 2.240, P = 0.025), F8-CP1 (z = 2.091, P = 0.037), F8-CP5 (z = 2.240, P = 0.025), and F4-Pz (z = 2.016, P = 0.044) from 1DoF to 3DoF translation mode. As in the other sessions, there was no discernible pattern in gamma coherences across different brain regions.

In Session 3 (the handheld AR session), Figure 9iii demonstrates that alpha coherences increased at F3-FC6 (z = −2.240, P = 0.025), F3-Pz (z = −2.389, P = 0.017), F4-Cz (z = −2.016, P = 0.044), F7-P4 (z = −2.091, P = 0.037), and F7-P8 (z = −2.651, P = 0.008) when participants switched from 1DoF to 3DoF translation mode. A downward trend in beta coherences was identified at F3-Pz (z = −2.651, P = 0.008), F4-CP3 (z = −2.352, P = 0.019), and FC2-CP1 (z = −1.979, P = 0.048), whereas an upward trend was detected at F4-P7 (z = 2.016, P = 0.044) from 1DoF to 3DoF translation mode. In addition, gamma coherences varied significantly between the two virtual object translation modes without a discernible pattern.

Throughout all three sessions, as shown in Figure 9, participants’ brain connectivity became more complicated when they conducted translation tasks on the handheld AR interface, whereas connectivity reached comparable levels on the mouse- and touch-based interfaces. Specifically, participants showed stronger alpha coherences between certain brain regions, reflecting this more complicated connectivity, when conducting translation tasks with the two translation modes on the handheld AR interface than on the other 2D interfaces.
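For readers who wish to reproduce a comparable connectivity measure, the sketch below computes the phase-locking value (PLV), one common estimator of inter-electrode phase coherence; the exact coherence metric, filter design, and parameters used in this study may differ, so the band edges and sampling rate here are assumptions.

```python
# Minimal sketch of alpha-band phase coherence between two electrodes
# via the phase-locking value (PLV). Parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def plv(x, y, fs=250, band=(8, 13)):
    """Phase-locking value between two same-length single-trial channels."""
    px = np.angle(hilbert(bandpass(x, fs, *band)))  # instantaneous phase of x
    py = np.angle(hilbert(bandpass(y, fs, *band)))  # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (px - py))))  # 0 = none, 1 = perfect locking

# Alpha-band coupling between, e.g., F3 and FC6 for one synthetic trial:
rng = np.random.default_rng(1)
f3, fc6 = rng.standard_normal((2, 2500))  # 10 s at an assumed 250 Hz
print(plv(f3, fc6))
```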

Discussion

The findings of this study are supported by integrated evidence from participants’ self-report, task performance, and EEG data. Based on these measures, the relationship between virtual object translation modes (1DoF and 3DoF) and users’ feeling of control, as well as the association between 2D interface types (mouse-based, touch-based, and handheld AR) and users’ feeling of control, is discussed as follows. In addition, several design considerations regarding translation modes and 2D interfaces are proposed at the end of this section.

Self-report

Data from the four-item AQ, which is widely adopted in the existing literature for subjective SoA assessment (Roth and Latoschik, Reference Roth and Latoschik2019), revealed that the 1DoF translation mode was associated with an increased feeling of control, reflected in higher SoA scores than those of the 3DoF translation mode. Some participants with prior CAD experience, who were used to employing the 1DoF translation mode for virtual object movement, preferred this mode and reported a higher feeling of control. Participants without prior experience faced the challenge of moving a 3D object on a 2D screen, which may have resulted in higher SoA scores in the 1DoF translation mode because it separates operations onto individual axes. From another perspective, participants, regardless of prior experience, found it rather difficult to translate a virtual object in the 3DoF mode because of the limitations of the 2D screen, as shown by their lower SoA scores.
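As an illustration of this subjective pipeline, the sketch below aggregates four-item AQ ratings into per-participant SoA scores and tests the paired difference between modes; the Likert scale and the ratings themselves are invented for illustration and are not the study’s data.

```python
# Minimal sketch of AQ scoring and a paired comparison between modes.
# Assumption: a Likert-type scale averaged over the four AQ items; the
# ratings below are invented for illustration only.
import numpy as np
from scipy.stats import wilcoxon

def soa_score(item_ratings):
    """item_ratings: (n_participants, 4) AQ responses; mean over the 4 items."""
    return np.asarray(item_ratings, dtype=float).mean(axis=1)

aq_1dof = [[6, 6, 5, 6], [5, 6, 6, 5], [6, 7, 6, 6], [5, 5, 6, 6], [7, 6, 6, 5]]
aq_3dof = [[4, 5, 4, 4], [5, 4, 4, 5], [4, 4, 5, 4], [3, 4, 4, 4], [5, 5, 4, 4]]
print(wilcoxon(soa_score(aq_1dof), soa_score(aq_3dof)))  # paired, non-parametric
```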

Among the three 2D interface types, the handheld AR interface was associated with a decreased feeling of control, as indicated by lower SoA scores than those of the other two interfaces (mouse- and touch-based). Participants were allowed to walk around in the physical environment and observe virtual objects on the handheld AR interface, whereas they remained seated for all experimental tasks on the other two interface types. Some participants found the handheld AR interface interesting, but it required more effort to move around, and the handheld device trembled while walking, which may have contributed to a lower feeling of control. In addition, participants with previous CAD experience favored the mouse-based interface out of habit, whereas those without prior CAD experience hardly perceived any difference between the mouse- and touch-based interfaces. In general, participants gave the mouse-based interface slightly higher SoA scores than the touch-based interface.

Task performance

Previous studies demonstrated that users who voluntarily plan and execute a sequence of actions prefer those with higher SoA, leading to greater task performance (Wen et al., Reference Wen, Yamashita and Asama2017; Jeunet et al., Reference Jeunet, Albert, Argelaguet and Lécuyer2018). In this research, participants translated virtual objects faster, and thus more efficiently, in the 1DoF mode than in the 3DoF mode. Specifically, the 1DoF translation mode allowed participants to move a virtual object along a single axis on the 2D screen, which let them concentrate on clear targets, consuming less manipulation time and yielding a greater feeling of control. In contrast, although the 3DoF translation mode offered the ability to move virtual objects along three axes simultaneously, participants found it more difficult to adapt to, resulting in a lower feeling of control and longer manipulation times in this experiment.

Participants spent much more time moving virtual objects on the handheld AR interface than on the other two interface types (mouse- and touch-based). The handheld AR interface permitted participants to observe virtual objects and physical surroundings on the 2D screen and to walk around in the physical space, which cost them considerably more time and was associated with a lower feeling of control during this experiment. The mouse- and touch-based interfaces yielded comparable results in the efficiency analysis. Participants regularly use mouse- and touch-based interfaces in their everyday lives and were more familiar with them, so these two interface types were comparatively time-efficient and accompanied by a higher feeling of control.

As CAD is a widely applied tool in the design process, researchers have found that task efficiency reflects the quality of designers’ work and influences the generation of creative ideas (Huang, Reference Huang2005; Alkemade et al., Reference Alkemade, Verbeek and Lukosch2017). Prior literature investigated the impact of interactive modes on the efficiency and quality of designers’ work in CAD (Huang, Reference Huang2005), and researchers attempted to improve both by integrating interactive content and control into CAD (Kochhar, Reference Kochhar1994; Huang, Reference Huang2005; Hu et al., Reference Hu, Ding, Zhang and Yan2008). CAD-related operations, such as virtual object translation, are essential for design inspiration and innovation. In this study, the 1DoF translation mode provided participants with more SoA, as well as a higher level of task efficiency, than the 3DoF mode. In other words, the 1DoF mode may give designers a preferable controlling experience in CAD, possibly enabling them to simulate creative ideas. We also found that the handheld AR interface yielded a lower feeling of control in translation tasks, with longer manipulation time, than the other two interface types, which may negatively influence the user experience in CAD and hinder creative idea generation.

EEG data

In the EEG microstate analysis, microstate labels were extracted by three clustering algorithms (K-means, PCA, and ICA) to highlight alpha power changes in specific brain regions related to SoA. Prior literature demonstrated that EEG microstate maps directly show EEG labels representing the entire EEG dataset (Khanna et al., Reference Khanna, Pascual-Leone, Michel and Farzan2015; Von Wegner et al., Reference Von Wegner, Knaut and Laufs2018). In this study, we found positive and negative alpha power peaks in the frontal and parietal brain regions; in other words, these two regions became more active when participants moved virtual objects on the different 2D interface types. In the current literature, frontal lobe activation is associated with motor control, which influences SoA (Alvarez and Emory, Reference Alvarez and Emory2006; Klimesch, Reference Klimesch2012), and the frontal brain region is associated with perceived SoA (Kang et al., Reference Kang, Im, Shim, Nahab, Park, Kim and Hallett2015; Kuttikat et al., Reference Kuttikat, Noreika, Shenker, Chennu, Bekinschtein and Brown2016). Our microstate analysis supports these accounts: the extracted EEG labels, used as principal features, indicated that alpha power was active in the frontal region, which is highly associated with SoA.
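For illustration, the sketch below shows one simplified variant of microstate map extraction: topographies at global field power (GFP) peaks are clustered with scikit-learn’s K-means. This is a stand-in for the K-means/PCA/ICA pipeline used in this study and, for brevity, ignores the polarity invariance of the standard modified K-means algorithm.

```python
# Simplified sketch of EEG microstate map extraction.
# Assumptions: plain K-means instead of polarity-invariant modified K-means;
# PCA or ICA could be swapped in as alternative map estimators.
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def microstate_maps(eeg, n_states=4):
    """eeg: (n_channels, n_samples), average-referenced.

    Returns cluster-center topographies of shape (n_states, n_channels).
    """
    gfp = eeg.std(axis=0)           # global field power over time
    peaks, _ = find_peaks(gfp)      # topographies are most stable at GFP peaks
    maps = eeg[:, peaks].T          # (n_peaks, n_channels)
    maps = maps / np.linalg.norm(maps, axis=1, keepdims=True)  # unit-norm maps
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(maps)
    return km.cluster_centers_

rng = np.random.default_rng(2)
eeg = rng.standard_normal((32, 5000))  # synthetic 32-channel recording
print(microstate_maps(eeg).shape)      # (4, 32)
```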

Based on the EEG spectral power analysis, this study revealed that frontal alpha power was considerably higher in the 3DoF translation mode than in the 1DoF translation mode across the 2D interface types. In addition, the handheld AR interface produced the highest frontal alpha power, whereas the mouse-based interface produced the lowest among the three 2D interface types. Prior studies demonstrated that frontal alpha power is inversely associated with SoA changes (Kang et al., Reference Kang, Im, Shim, Nahab, Park, Kim and Hallett2015; Kuttikat et al., Reference Kuttikat, Noreika, Shenker, Chennu, Bekinschtein and Brown2016). Hence, this study indicates that participants perceived higher SoA in the 1DoF translation mode than in the 3DoF translation mode, and lower SoA on the handheld AR interface than on the other interface types, given the significantly increased activation in the frontal brain region. In contrast, the findings revealed that the beta and gamma powers were only weakly associated with SoA, which may support previous literature reporting no apparent association between these bands and SoA (Schneider et al., Reference Schneider, Eiband, Ullrich and Butz2018).

The EEG phase coherence analysis in this study revealed that alpha coherences between the frontal and other brain regions were higher in the 3DoF translation mode than in the 1DoF translation mode across the 2D interface types. As there is evidence that decreased brain activation is related to increased SoA (Wen et al., Reference Wen, Yamashita and Asama2017; Wen et al., Reference Wen, Kuroki and Asama2019), participants in this study experienced higher SoA in the 1DoF translation mode than in the 3DoF translation mode owing to less brain activation. Similarly, the handheld AR interface was associated with considerably more alpha coherence between the frontal and other brain regions than the other two interface types. In addition, this research revealed a downward trend in beta coherence between the frontal and other brain regions from 1DoF to 3DoF translation mode, which is consistent with the alpha-blocking response, whereby alpha waves are generated to inhibit the generation of beta waves (Schneider et al., Reference Schneider, Eiband, Ullrich and Butz2018). The gamma coherence analysis, in turn, appears to confirm the uncertain association between the gamma band and SoA in cognitive activities (Kang et al., Reference Kang, Im, Shim, Nahab, Park, Kim and Hallett2015).

Previous literature demonstrated that the EEG alpha band is widely adopted to investigate the quality of design concepts and plays a significant role in creative thinking and design innovation (Cao et al., Reference Cao, Zhao and Guo2021). In particular, previous literature reported higher activation of the alpha band during the problem-solving stage (Vieira et al., Reference Vieira, Benedek, Gero, Li and Cascini2022), and the alpha band is activated in both elementary-level and higher-level design activities (Li et al., Reference Li, Becattini and Cascini2021). This study found that the alpha band was highly related to users’ SoA in CAD; for example, the frontal alpha band showed higher activation in the 3DoF mode than in the 1DoF mode, while the mouse-based interface resulted in the lowest frontal alpha band activation among the three 2D interface types. Given that CAD is often used as a tool to encourage design inspiration and innovation (Veisz et al., Reference Veisz, Namouz, Joshi and Summers2012; Wang et al., Reference Wang, Bai, Billinghurst, Zhang, Wei, Xu and Zhang2021), the feeling of control may influence users’ ability to generate creative ideas in CAD. In addition, there are connections across various psychological aspects of design cognition. In particular, perceiving SoA is inseparable from experiencing cognitive load, and cognitive load may negatively impact SoA (Howard et al., Reference Howard, Edwards and Bayliss2016). Related research also investigated differences in cognitive load when users conduct 3D modeling during creative idea generation, finding suppression of parietal and occipital alpha power on 2D screens (Vieira et al., Reference Vieira, Benedek, Gero, Li and Cascini2022). Hence, the importance of SoA for the design domain deserves further discussion in future research.

Summary

Participants experienced more SoA in the 1DoF translation mode than in the 3DoF translation mode, as evidenced by self-report, task performance, and EEG data. As stated previously, 2D interfaces pose the challenge of moving a 3D object on a 2D screen. The 1DoF translation mode provided the capacity to control a virtual object in a single direction, breaking the three dimensions of the simulated environment down so that participants could experience a higher feeling of control, with more purpose and anticipation in their operations. The 3DoF translation mode enabled simultaneous movement in three dimensions based on interactive techniques, but participants were less able to predict the outcomes of their actions and reported a lower feeling of control, owing to the limitations of the 2D screen, despite the real-time conversion of 2D points to 3D coordinates. Furthermore, prior literature showed that task performance reflects the quality of designers’ work, and scholars have attempted to improve work quality by improving task efficiency (Alkemade et al., Reference Alkemade, Verbeek and Lukosch2017); task performance and creative idea generation are thus positively related in the design process. Moreover, this research showed a positive association between task performance and SoA, which is highly related to the EEG alpha band. Hence, this paper proposes that the 1DoF translation mode may help designers employ CAD more effectively while enhancing the user experience in the process of generating design ideas.
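To make the geometric difference between the two modes explicit, the sketch below shows one common way a 2D cursor ray can drive 1DoF and 3DoF translation; the camera model and drag planes are illustrative assumptions and not the study’s implementation.

```python
# Minimal geometric sketch of 1DoF vs. 3DoF translation from a 2D cursor ray.
# Assumptions: the camera ray is already unprojected from the screen point,
# and the drag planes are chosen as in many (not necessarily this study's) tools.
import numpy as np

def ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect a ray with a plane and return the 3D hit point."""
    t = np.dot(plane_point - origin, plane_normal) / np.dot(direction, plane_normal)
    return origin + t * direction

def translate_3dof(obj_pos, ray_o, ray_d, cam_forward):
    # 3DoF: drag on a camera-facing plane through the object, so one 2D
    # gesture updates the full 3D position.
    return ray_plane(ray_o, ray_d, obj_pos, cam_forward)

def translate_1dof(obj_pos, ray_o, ray_d, axis, plane_normal):
    # 1DoF: intersect the ray with a plane containing the chosen axis, then
    # keep only the displacement component along that axis (arrow handle).
    hit = ray_plane(ray_o, ray_d, obj_pos, plane_normal)
    return obj_pos + np.dot(hit - obj_pos, axis) * axis

ray_o = np.array([0.0, 0.0, 5.0])      # camera placed in front of the object
ray_d = np.array([0.1, 0.2, -1.0])
ray_d = ray_d / np.linalg.norm(ray_d)  # normalized cursor ray direction
obj = np.zeros(3)
print(translate_3dof(obj, ray_o, ray_d, np.array([0.0, 0.0, 1.0])))
print(translate_1dof(obj, ray_o, ray_d, np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0])))
```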

As supported by self-report, task performance, and EEG data, participants perceived much lower SoA while performing translation tasks on the handheld AR interface than on the other 2D interface types, with the mouse-based interface leading to slightly higher SoA than the touch-based interface. Although all of these interfaces were based on 2D screens, the handheld AR interface appeared distinct from the other two, possibly because it combines actual and virtual spaces. The handheld AR interface made it more challenging to manipulate a virtual object on a 2D screen integrated with the physical environment. In addition, participants who were used to traditional 2D interface types, such as mouse- and touch-based interfaces, were unfamiliar with the handheld AR interface and thus experienced a reduced feeling of control. However, the handheld AR interface broadens interaction possibilities thanks to its distinctive properties; designers should therefore pay more attention to how to increase the feeling of control on this interface. From another perspective, scholars have attempted to improve the quality of designers’ work by improving task efficiency in the design process (Alkemade et al., Reference Alkemade, Verbeek and Lukosch2017). This research revealed that the mouse-based interface led to higher task efficiency and less alpha band activation than the other interface types. Consequently, this study suggests that the mouse-based interface may enhance the user experience, allowing designers to promote more creative ideas.

CAD has always been popular among designers and is essential for design innovation and inspiration. In the design process, CAD is used to simulate and refine creative ideas; meanwhile, the user experience of CAD contributes to designers’ ability to develop creative solutions and think creatively (Veisz et al., Reference Veisz, Namouz, Joshi and Summers2012). In addition, CAD operations, such as virtual object translation, have been investigated as potential mechanisms to enhance users’ task performance and psychological aspects, which may influence their ability to simulate creative ideas (Alkemade et al., Reference Alkemade, Verbeek and Lukosch2017). In this study, the 1DoF translation mode, as well as the mouse-based interface, brought a better feeling of control than the other conditions, giving users a preferable controlling experience. As SoA is a critical index of user experience (Wen et al., Reference Wen, Kuroki and Asama2019; Caspar et al., Reference Caspar, De Beir, Lauwers, Cleeremans and Vanderborght2021), it is critical for designers to feel in control of CAD-related operations in order to produce high-quality work and devise creative solutions. Moreover, there are connections across various psychological aspects of design cognition. Prior literature indicated that cognitive load has the potential to evaluate designers’ efficiency and quality in the design process through EEG signals (Liu et al., Reference Liu, Li, Xiong, Cao and Yuan2018; Jia and Zeng, Reference Jia and Zeng2021), and scholars have proposed a negative association between cognitive load and SoA (Howard et al., Reference Howard, Edwards and Bayliss2016). Hence, we suggest that a preferable controlling experience may help designers improve the quality of their work.

Several future works can be considered based on this study. Exploring the influence of SoA on the design process is still in its early stages, even though we have evaluated the relationship between SoA and translation modes, as well as 2D interface types, in CAD. On the one hand, based on the findings of this study, task performance has an impact on users’ feeling of control, and scholars have attempted to enhance work quality by improving task efficiency in CAD as a way to foster better creative idea generation in the design process (Veisz et al., Reference Veisz, Namouz, Joshi and Summers2012). It has also been proposed that introducing cognitive artificial intelligence can allow a system to analyze and make decisions like its users (Zhao et al., Reference Zhao, Li and Xu2022), bringing cognition capability to the system for an enhanced user experience. On the other hand, we examined changes in SoA with different translation modes and 2D interface types as measured by EEG data; additionally, EEG has been shown to provide insights into designers’ creative ideas (Liu et al., Reference Liu, Li, Xiong, Cao and Yuan2018; Jia and Zeng, Reference Jia and Zeng2021). Although the EEG alpha band is frequently adopted to explore the quality of design concepts, and the alpha band also plays a significant role in SoA evaluation, more evidence is necessary to confirm the association between SoA and innovative solutions in the design process through EEG measures.

Design considerations

Scholars have revealed that enhancing SoA is one of the most common principles in design guidelines (Lukoff et al., Reference Lukoff, Lyngs, Zade, Liao, Choi, Fan and Hiniker2021), since feeling in control of one’s actions is a basic human need defined by self-determination theory (Brewer and Kameswaran, Reference Brewer and Kameswaran2018). For instance, one of Shneiderman and Plaisant’s Eight Golden Rules of Interface Design is that “users require the experience that they are in control of an interactive interface that responds to their actions”. Prior literature has encouraged researchers to gather more supporting evidence for considering SoA in design (Coyle et al., Reference Coyle, Moore, Kristensson, Fletcher and Blackwell2012; Bergstrom-Lehtovirta et al., Reference Bergstrom-Lehtovirta, Coyle, Knibbe and Hornbæk2018). Based on the findings of this study, several design considerations can be drawn for the two virtual object translation modes (1DoF and 3DoF) and the three 2D interface types (mouse-based, touch-based, and handheld AR), which may be leveraged in future applications.

1DoF is a preferable mode for translating virtual objects in CAD compared with 3DoF if designers care about a greater feeling of control on 2D screens. Based on subjective and objective measures, this study revealed that participants experienced higher SoA in the 1DoF translation mode than in the 3DoF translation mode on 2D screens; thus, the 1DoF translation mode may be the preferable option on these three 2D interface types. When considering SoA in the CAD scenario, the 1DoF translation mode is a key mode for designers who need to improve their feeling of control and work performance. The 3DoF translation mode seems to be a more natural interaction for controlling all three dimensions on 2D screens, but it cannot faithfully map physical actions into the simulated environment owing to the constraints of moving a 3D object on a 2D screen. Although the 3DoF translation mode offers a strategy to move virtual objects along the x-, y-, and z-axes concurrently, users still experienced reduced SoA with this mode throughout the experiment. Given prior design guidelines emphasizing the feeling of control, the 1DoF translation mode is more suitable for enhancing SoA, while the 3DoF translation mode should be selected with more deliberation. The 1DoF translation mode is an important element of CAD in the design process; it brings a better controlling experience when designers translate virtual objects, which may help them achieve higher design quality and creativity compared with the 3DoF translation mode. In addition, a combination of the two translation modes, as implemented in traditional CAD software, can also be considered in the design process.

The mouse-based interface is highly recommended when designers prefer an increased controlling experience, whereas the touch-based interface should be considered for applications that require more flexible interactions with the hands. According to the subjective and objective analyses, this research found that the mouse-based interface contributed slightly higher SoA in translation tasks than the touch-based interface, possibly indicating that the traditional mouse is preferable for virtual object translation tasks in simulated settings. CAD tools, for example, may be better suited to the mouse-based interface, which is associated with enhanced work efficiency and an improved feeling of control over designers’ operations; this may also be why most designers are used to operating CAD tools with a mouse. CAD is commonly employed in the design process, and the mouse-based interface is associated with better work performance and feeling of control than the touch-based interface, which may contribute to more creative ideas. On the other hand, existing research has proposed mobile CAD systems transferred from desktops to touch-based devices to make CAD tools more accessible (Lupinetti et al., Reference Lupinetti, Cabiddu, Giannini and Monti2019). Finger gestures on the touch-based interface, allowing single-contact, multi-contact, and two-hand interactions on the screen, are more flexible and broaden the possibilities of virtual object translation for mobile CAD systems and other entertainment applications that include object translation operations (Moldovan et al., Reference Moldovan, Nicula, Pasca, Popa, Namburu, Oros and Brie2020). In this study, one-finger gestures were used to translate virtual objects on a tablet; other, more natural interactions, such as two-hand interaction, may be recommended to enhance users’ feeling of control. Even though the touch-based interface brings flexible interactions to the design process, it may not be appropriate for all design tasks owing to designers’ habits or hardware constraints; nevertheless, it can be combined with other interface types to support creative idea generation in CAD.

The handheld AR interface presents its benefits when the interaction with the physical environment is taken into consideration, hence broadening the design scope in future work. This paper suggests that the AR interface should be distinguished from the mouse- and touch-based interfaces. Although this study revealed that the handheld AR interface contributed to the lowest SoA in virtual object translation tasks among the three 2D interface types, related techniques still provide more opportunities for user interaction owing to the unique properties of handheld AR. For instance, the use of CAD tools on handheld AR interfaces, which augment digital information over the view of the real world, allows designers to work within physical surroundings (Kim et al., Reference Kim, Park and Ko2018); meanwhile, commercial CAD systems, such as AR-CAD, are already applied in industry. Furthermore, users can observe the real world through the handheld device, which may increase their sense of control when they interact not only with virtual objects but also with physical objects. For instance, prior literature suggested that virtual and physical object manipulation based on a handheld AR system can support the recovery of upper limb motor function in patients with stroke, with SoA considered an important metric in motor rehabilitation (Ying and Aimin, Reference Ying and Aimin2017). From another perspective, when considering design creativity and innovation, handheld AR features may be particularly conducive to collaborative CAD work: designers can observe each other’s operations in real time in the same virtual and real surroundings, and interact and collaborate efficiently with one another.

Conclusion

Virtual object translation is an essential feature in CAD scenarios based on 2D screens, and CAD plays an important role in the design process. However, little was known in the existing literature about SoA, which emphasizes the feeling of control, in two virtual object translation modes (1DoF and 3DoF) and three 2D interface types (mouse-based, touch-based, and handheld AR). Given the significance of SoA, this research compared participants’ feeling of control across the two translation modes and the three 2D interface types using self-report, task performance, and EEG data. The research revealed that the 1DoF mode was probably associated with higher SoA in translation tasks on various 2D interface types than the 3DoF mode, and that the handheld AR interface, with reduced SoA, stood apart from the other two 2D interfaces under both translation modes. These findings provide researchers with novel perspectives on the importance of considering SoA. The contribution of this study is to explore, for the first time, the potential relationship between translation modes and SoA, as well as the possible association between 2D interfaces and SoA. The results of three clustering algorithms identified that alpha power changes were concentrated in the frontal brain region, which is highly associated with SoA; meanwhile, this study provided further technical evidence to the limited literature by assessing SoA through spectral power and phase coherence. In addition, based on these findings, we concluded several design considerations that may allow designers to perceive a preferable feeling of control in CAD and help them simulate creative ideas during the design process.

Financial support

This work is supported by Key Program Special Fund in XJTLU (KSF-E-34), Research Development Fund of XJTLU (RDF-18-02-30), and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (20KJB520034).

Conflict of interest

None.

Data availability statement

Owing to the nature of this study, participants did not provide permission for their data to be published publicly; hence, supporting data are unavailable.

Wenxin Sun received his M.Sc. degree in computer science from the Chinese University of Hong Kong in 2015. He is currently pursuing his Ph.D. at the University of Liverpool. His research interests include human-computer interaction and user experience in mixed reality.

Mengjie Huang is an Assistant Professor in the Design School at Xi'an Jiaotong-Liverpool University, China. She received the B.Eng. degree from Sichuan University and the Ph.D. degree from National University of Singapore. Her research interests lie in human-computer interaction design, with special focuses on human factors, user experience, and brain decoding. Her recent research projects relate to virtual/augmented reality and brain-computer interface.

Chenxin Wu is currently working toward his B.Eng. in digital media technology at Xi'an Jiaotong-Liverpool University. His research interests include human-computer interaction and user experience in mixed reality.

Rui Yang is an Associate Professor in the School of Advanced Technology at Xi'an Jiaotong-Liverpool University, China. He received the B.Eng. degree in Computer Engineering and the Ph.D. degree in Electrical and Computer Engineering from National University of Singapore. His research interests include machine learning-based data analysis and applications. He is an active reviewer for international journals and conferences and is currently serving as an Associate Editor for Neurocomputing.

Ji Han is a Senior Lecturer (Associate Professor) in Design and Innovation at the Department of Innovation, Technology and Entrepreneurship at the University of Exeter. His research addresses various topics relating to design and creativity and places a strong emphasis on exploring new design approaches and developing advanced design support tools. His general interests include design creativity, data-driven design, AI in design, and virtual reality.

Yong Yue (BEng Northeastern China, PhD Heriot-Watt UK, CEng, FIET, FIMechE, FHEA) is a Professor at the Department of Computing, and Director of the Virtual Engineering Centre (VEC) at Xi'an Jiaotong-Liverpool University, China. He was Head of the Department of Computer Science and Software Engineering (2013-2019). Prior to joining XJTLU, he had held various positions in industry and academia in China and the UK, including Engineer, Project Manager, Professor, Director of Research and Head of Department. His current research interests include virtual reality, computer vision, robot applications, and operations research. He has led a variety of research and professional projects supported by major funding bodies and industries. He has over 220 peer-reviewed publications and supervised 26 PhD students to successful completion.

References

Abraham, A (2016) Gender and creativity: an overview of psychological and neuroscientific literature. Brain Imaging and Behavior 2, 609–618.
Acharya, S and Chakrabarti, A (2020) A conceptual tool for environmentally benign design: development and evaluation of a “proof of concept”. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 34, 30–44.
Alibali, MW (2005) Gesture in spatial cognition: expressing, communicating, and thinking about spatial information. Spatial Cognition and Computation 5, 307–331.
Alicja, K and Maciej, S (2022) Can AI see bias in X-ray images? International Journal of Network Dynamics and Intelligence 1, 48–64.
Alkemade, R, Verbeek, FJ and Lukosch, SG (2017) On the efficiency of a VR hand gesture-based interface for 3D object manipulations in conceptual design. International Journal of Human–Computer Interaction 33, 882–901.
Alvarez, JA and Emory, E (2006) Executive function and the frontal lobes: a meta-analytic review. Neuropsychology Review 16, 17–42.
Aris, SAM, Jalil, SZA, Bani, NA, Kaidi, HM and Muhtazaruddin, MN (2018) Statistical feature analysis of EEG alpha asymmetry between relaxed and non-relaxed. In 2018 2nd International Conference on BioSignal Analysis, Processing and Systems (ICBAPS). IEEE, pp. 171–175.
Atilola, O and Linsey, J (2015) Representing analogies to influence fixation and creativity: a study comparing computer-aided design, photographs, and sketches. AI EDAM 29, 161–171.
Bai, H, Lee, GA and Billinghurst, M (2012) Freeze view touch and finger gesture-based interaction methods for handheld augmented reality interfaces. In Proceedings of the 27th Conference on Image and Vision Computing New Zealand, pp. 126–131.
Benedek, M (2018) Internally directed attention in creative cognition. In Jung, RE and Vartanian, O (eds), The Cambridge Handbook of the Neuroscience of Creativity. Cambridge, UK: Cambridge University Press, pp. 180–194.
Bergstrom-Lehtovirta, J, Coyle, D, Knibbe, J and Hornbæk, K (2018) I really did that: sense of agency with touchpad, keyboard, and on-skin interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1–8.
Besançon, L, Ynnerman, A, Keefe, DF, Yu, L and Isenberg, T (2021) The state of the art of spatial interfaces for 3D visualization. Computer Graphics Forum 40, 293–326.
Bonnici, A, Akman, A, Calleja, G, Camilleri, KP, Fehling, P, Ferreira, A and Rosin, PL (2019) Sketch-based interaction and modeling: where do we stand? AI EDAM 33, 370–388.
Brewer, RN and Kameswaran, V (2018) Understanding the power of control in autonomous vehicles for people with vision impairment. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 185–197.
Camba, J, Contero, M and Johnson, M (2014) Management of visual clutter in annotated 3D CAD models: a comparative study. In Marcus, A, Møllenbach, E, Abascal, J and Sturdee, M (eds), International Conference of Design, User Experience, and Usability. Cham: Springer, pp. 405–416.
Cao, J, Zhao, W and Guo, X (2021) Utilizing EEG to explore design fixation during creative idea generation. Computational Intelligence and Neuroscience 2021, 1–10.
Caspar, EA, De Beir, A, Lauwers, G, Cleeremans, A and Vanderborght, B (2021) How using brain-machine interfaces influences the human sense of agency. PLoS One 16, 1–24.
Chang, HJ, Huang, K and Wu, C (2006) Determination of sample size in using central limit theorem for Weibull distribution. International Journal of Information and Management Sciences 17, 31.
Chang, HJ, Wu, CH, Ho, JF and Chen, PY (2008) On sample size in using central limit theorem for gamma distribution. Information and Management Sciences 19, 153–174.
Coyle, D, Moore, J, Kristensson, PO, Fletcher, P and Blackwell, A (2012) I did that! Measuring users’ experience of agency in their own actions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2025–2034.
Cun, W, Mo, R, Chu, J, Yu, S, Zhang, H, Fan, H and Chen, C (2021) Sitting posture detection and recognition of aircraft passengers using machine learning. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 35, 284–294.
Dong, Z, Piumsomboon, T, Zhang, J, Clark, A, Bai, H and Lindeman, R (2020) A comparison of surface and motion user-defined gestures for mobile augmented reality. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–8.
Drucker, SM, Fisher, D, Sadana, R, Herron, J and Schraefel, MC (2013) Touchviz: a case study comparing two interfaces for data analytics on tablets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2301–2310.
Fillingim, KB, Shapiro, H, Reichling, CJ and Fu, K (2021) Effect of physical activity through virtual reality on design creativity. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 35, 99–115.
Gallagher, S (2000) Philosophical conceptions of the self: implications for cognitive science. Trends in Cognitive Sciences 4, 14–21.
Goh, ES, Sunar, MS and Ismail, AW (2019) 3D object manipulation techniques in handheld mobile augmented reality interface: a review. IEEE Access 7, 40581–40601.
Gorantla, VR, Tedesco, S, Chandanathil, M, Maity, S, Bond, V, Lewis, C and Millis, RM (2020) Associations of alpha and beta interhemispheric EEG coherences with indices of attentional control and academic performance. Behavioral Neurology 2020, 1–7.
Guerino, GC and Valentim, NMC (2020) Usability and user experience evaluation of natural user interfaces: a systematic mapping study. IET Software 14, 451–467.
Haggard, P (2005) Conscious intention and motor cognition. Trends in Cognitive Sciences 9, 290–295.
Hancock, M, Ten Cate, T and Carpendale, S (2009) Sticky tools: full 6DoF force-based interaction for multi-touch tables. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, pp. 133–140.
Hong, TCK and Economou, A (2022) What shape grammars do that CAD should: the 14 cases of shape embedding. AI EDAM 36, e4, 1–20.
Horvat, N, Martinec, T, Lukačević, F, Perišić, MM and Škec, S (2022) The potential of immersive virtual reality for representations in design education. Virtual Reality 26, 1–18.
Howard, EE, Edwards, SG and Bayliss, AP (2016) Physical and mental effort disrupts the implicit sense of agency. Cognition 157, 114–125.
Hu, ZH, Ding, YS, Zhang, WB and Yan, Q (2008) An interactive co-evolutionary CAD system for garment pattern design. Computer-Aided Design 40, 1094–1104.
Huang, G (2005) Introducing virtual engineering technology into interactive design process with high-fidelity models. In Proceedings of the Winter Simulation Conference, 2005. IEEE, 10 pp.
Jeunet, C, Albert, L, Argelaguet, F and Lécuyer, A (2018) “Do you feel in control?”: towards novel approaches to characterize, manipulate and measure the sense of agency in virtual environments. IEEE Transactions on Visualization and Computer Graphics 24, 1486–1495.
Jia, W and Zeng, Y (2021) EEG signals respond differently to idea generation, idea evolution and evaluation in a loosely controlled creativity experiment. Scientific Reports 11, 1–20.
Kang, SY, Im, CH, Shim, M, Nahab, FB, Park, J, Kim, DW and Hallett, M (2015) Brain networks responsible for sense of agency: an EEG study. PLoS One 16, 1–24.
Khan, S and Tunçer, B (2019) Speech analysis for conceptual CAD modeling using multi-modal interfaces: an investigation into architects’ and engineers’ speech preferences. AI EDAM 33, 275–288.
Khanna, A, Pascual-Leone, A, Michel, CM and Farzan, F (2015) Microstates in resting-state EEG: current status and future directions. Neuroscience & Biobehavioral Reviews 49, 105–113.
Kim, M and Han, J (2019) Effects of switchable DOF for mid-air manipulation in immersive virtual environments. International Journal of Human–Computer Interaction 35, 1147–1159.
Kim, D, Park, J and Ko, KH (2018) Development of an AR based method for augmentation of 3D CAD data onto a real ship block image. Computer-Aided Design 98, 1–11.
Klimesch, W (2012) Alpha-band oscillations, attention, and controlled access to stored information. Trends in Cognitive Sciences 16, 606–617.
Knoedel, S and Hachet, M (2011) Multi-touch RST in 2D and 3D spaces: studying the impact of directness on user performance. In 2011 IEEE Symposium on 3D User Interfaces (3DUI), pp. 75–78.
Kochhar, S (1994) CCAD: a paradigm for human-computer cooperation in design. IEEE Computer Graphics and Applications 14, 54–65.
Kuttikat, A, Noreika, V, Shenker, N, Chennu, S, Bekinschtein, T and Brown, CA (2016) Neurocognitive and neuroplastic mechanisms of novel clinical signs in CRPS. Frontiers in Human Neuroscience 10, 16.
Kwon, E, Huang, F and Goucher-Lambert, K (2022) Enabling multi-modal search for inspirational design stimuli using deep learning. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 36, e22, 1–18.
Lee, GA, Yang, U, Kim, Y, Jo, D, Kim, KH, Kim, JH and Choi, JS (2009) Freeze-set-go interaction method for handheld mobile augmented reality environments. In Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology, pp. 143–146.
Li, S, Becattini, N and Cascini, G (2021) Correlating design performance to EEG activation: early evidence from experimental data. Proceedings of the Design Society 1, 771–780.
Liu, L, Li, Y, Xiong, Y, Cao, J and Yuan, P (2018) An EEG study of the relationship between design problem statements and cognitive behaviors during conceptual design. AI EDAM 32, 351–362.
Louis, T, Troccaz, J, Rochet-Capellan, A, Hoyek, N and Bérard, F (2020) When high fidelity matters: AR and VR improve the learning of a 3D object. In Proceedings of the International Conference on Advanced Visual Interfaces, pp. 1–9.
Lukoff, K, Lyngs, U, Zade, H, Liao, JV, Choi, J, Fan, K and Hiniker, A (2021) How the design of YouTube influences user sense of agency. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–17.
Lupinetti, K, Cabiddu, D, Giannini, F and Monti, M (2019) CAD3A: a web-based application to visualize and semantically enhance CAD assembly models. In 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). IEEE, pp. 462–469.
Mathewson, KE, Lleras, A, Beck, DM, Fabiani, M, Ro, T and Gratton, G (2011) Pulsed out of awareness: EEG alpha oscillations represent a pulsed-inhibition of ongoing cortical processing. Frontiers in Psychology 2, 99.
Moldovan, A, Nicula, V, Pasca, I, Popa, M, Namburu, JK, Oros, A and Brie, P (2020) A user interface description language for runtime omni-channel user interfaces. Proceedings of the ACM on Human-Computer Interaction 4, 152.
Nanjappan, V, Shi, R, Liang, HN, Xiao, H, Lau, KKT and Hasan, K (2019) Design of interactions for handheld augmented reality devices using wearable smart textiles: findings from a user elicitation study. Applied Sciences 9, 3177.
Nataraj, R and Sanford, S (2021) Control modification of grasp force covaries agency and performance on rigid and compliant surfaces. Frontiers in Bioengineering and Biotechnology 8, 574006.
Pejic, J and Pejic, P (2022) Linear kitchen layout design via machine learning. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 36, e9, 1–12.
Reisman, JL, Davidson, PL and Han, JY (2009) A screen-space formulation for 2D and 3D direct manipulation. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, pp. 69–78.
Rekimoto, J (2014) A new you: from augmented reality to augmented human. In Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces, pp. 1–2.
Rogers, K, Funke, J, Frommel, J, Stamm, S and Weber, M (2019) Exploring interaction fidelity in virtual reality: object manipulation and whole-body movements. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–14.
Roth, D and Latoschik, ME (2019) Construction of a validated virtual embodiment questionnaire. arXiv preprint arXiv:1911.10176.
Schneider, H, Eiband, M, Ullrich, D and Butz, A (2018) Empowerment in HCI: a survey and framework. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1–14.
Seghezzi, S, Zirone, E, Paulesu, E and Zapparoli, L (2019) The brain in (willed) action: a meta-analytical comparison of imaging studies on motor intentionality and sense of agency. Frontiers in Psychology 10, 804.
Su, GE, Sunar, MS and Ismail, AW (2020) Device-based manipulation technique with separated control structures for 3D object translation and rotation in handheld mobile AR. International Journal of Human-Computer Studies 141, 102433.
Sun, W, Huang, M, Yang, R, Zhang, J, Wang, L, Han, J and Yue, Y (2020) Workload, presence and task performance of virtual object manipulation on WebVR. In 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), pp. 358–361.
Sun, W, Huang, M, Yang, R, Han, J and Yue, Y (2021) Mental workload evaluation of virtual object manipulation on WebVR: an EEG study. In 2021 IEEE International Conference on Human System Interaction (HSI), pp. 358–361.
Sun, W, Huang, M, Wu, C and Yang, R (2022a) Exploring virtual object translation in head-mounted augmented reality for upper limb motor rehabilitation with motor performance and eye movement characteristics. In Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, pp. 1–3.
Sun, W, Huang, M, Wu, C and Yang, R (2022b) Sense of agency on handheld AR for virtual object translation. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 838–839.
Sun, L, Zhang, Y, Li, Z, Zhou, Z and Zhou, Z (2022c) inML kit: empowering the prototyping of ML-enhanced products by involving designers in the ML lifecycle. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 36, e8, 1–20.
Tuddenham, P, Kirk, D and Izadi, S (2010) Graspables revisited: multi-touch vs. tangible input for tabletop displays in acquisition and manipulation tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2223–2232.
Veisz, D, Namouz, EZ, Joshi, S and Summers, JD (2012) Computer-aided design versus sketching: an exploratory case study. AI EDAM 26, 317–335.
Vieira, S, Benedek, M, Gero, J, Li, S and Cascini, G (2022) Design spaces: neurophysiological activations in constrained and open design tasks.
Von Wegner, F, Knaut, P and Laufs, H (2018) EEG microstate sequences from different clustering algorithms are information-theoretically invariant. Frontiers in Computational Neuroscience 12, 70.
Wan, Z, Yang, R, Huang, M, Zeng, N and Liu, X (2021) A review on transfer learning in EEG signal analysis. Neurocomputing 421, 1–14.
Wang, Y, MacKenzie, CL, Summers, VA and Booth, KS (1998) The structure of object transportation and orientation in human-computer interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 312–319.
Wang, P, Bai, X, Billinghurst, M, Zhang, S, Wei, S, Xu, G and Zhang, J (2021) 3DGAM: using 3D gesture and CAD models for training on mixed reality remote collaboration. Multimedia Tools and Applications 80, 31059–31084.
Wang, L, Huang, M, Yang, R, Liang, HN, Han, J and Sun, Y (2022) Survey of movement reproduction in immersive virtual rehabilitation. IEEE Transactions on Visualization and Computer Graphics.
Watson, D, Hancock, M, Mandryk, RL and Birk, M (2013) Deconstructing the touch experience. In Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces, pp. 199–208.
Wen, W, Yamashita, A and Asama, H (2017) Measurement of the perception of control during continuous movement using electroencephalography. Frontiers in Human Neuroscience 11, 392.
Wen, W, Kuroki, Y and Asama, H (2019) The sense of agency in driving automation. Frontiers in Psychology 10, 2691.
Wodehouse, A, Loudon, B and Urquhart, L (2020) The configuration and experience mapping of an accessible VR environment for effective design reviews. AI EDAM 34, 387–400.
Ying, W and Aimin, W (2017) Augmented reality based upper limb rehabilitation system. In 2017 13th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), pp. 426–430.
Yu, L, Svetachov, P, Isenberg, P, Everts, MH and Isenberg, T (2010) FI3D: direct-touch interaction for the exploration of 3D scientific visualization spaces. IEEE Transactions on Visualization and Computer Graphics 16, 1613–1622.
Yu, N, Yang, R and Huang, M (2022) Deep common spatial pattern based motor imagery classification with improved objective function. International Journal of Network Dynamics and Intelligence 1, 73–84.
Zhao, G, Li, Y and Xu, Q (2022) From emotion AI to cognitive AI. International Journal of Network Dynamics and Intelligence 1, 65–72.
Zhou, F, Duh, HBL and Billinghurst, M (2008) Trends in augmented reality tracking, interaction and display: a review of ten years of ISMAR. In 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 193–202.
Zito, GA, Wiest, R and Aybek, S (2020) Neural correlates of sense of agency in motor control: a neuroimaging meta-analysis. PLoS One 15, e0234321, 1–17.