Among all the types of digital data regularly collected by archaeologists, the largest and most complex are usually three-dimensional (3D) models. A typical photogrammetric capture of a trench or site can easily contain tens of millions of points that connect to form a triangle mesh wrapped in a photorealistic texture. Models of spaces and objects captured by photogrammetry, lidar, or structured light are both highly detailed and precise. This raises the question: what do we do with all these data-rich 3D models? Spatial and morphometric analyses are naturally central to archaeological research, but software for undertaking this type of work in 3D remains challenging to use. Moving from the interface of a 2D screen to a more realistic 3D environment could support more intuitive human interaction with the data (Figure 1). Although the closed environment of virtual reality (VR) has its purposes, given that archaeology inherently deals with the real world and with objects that exist around us in the present, the ability to mix the virtual and the real may hold more potential for future applications, including in the classroom, during field excavation training, and for heritage or data interpretation (Figure 2). Here, we review these types of mixed worlds, enabled by augmented reality (AR) and mixed reality (MR) technologies. Although we may be years or even a decade or more away from hardware that allows truly seamless integration of the real and virtual, we should consider now where we are and where we are headed.
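To give a concrete sense of the scale of these datasets, the following minimal sketch loads a photogrammetric mesh and reports its basic statistics. The open-source trimesh library and the file name are illustrative assumptions rather than part of any workflow discussed in this article:

```python
# Minimal sketch: inspect the size of a photogrammetric mesh.
# Assumptions: the model was exported as an OBJ file named "trench_model.obj",
# and the open-source trimesh library is installed (pip install trimesh).
import trimesh

mesh = trimesh.load("trench_model.obj", force="mesh")

print(f"vertices:   {len(mesh.vertices):,}")   # often tens of millions for a full trench
print(f"triangles:  {len(mesh.faces):,}")
print(f"watertight: {mesh.is_watertight}")     # quick integrity check before analysis
print(f"bounding box extents (model units): {mesh.bounding_box.extents}")
```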
DEFINITIONS AND HARDWARE
Over the last decade, many people have had an opportunity to try the new generation of virtual reality devices. VR completely closes off the user's view of the real world, allowing for full immersion in the virtual environment through sight and sound (Burdea and Coiffet 2003). One can imagine several useful implementations of VR in archaeology, such as for public education (Ellenberger 2017). Given that archaeologists build interpretations from real objects and spaces, however, technologies that enable the mixing of real evidence with interpretation may better fit our research. Augmented reality (AR) and mixed reality (MR) allow for the visual placement of virtual computer graphics within the real world, with which the user can interact in real time (Azuma 1997). The relative weight of reality and virtuality in the scene depends on the program design (Speicher et al. 2019). There remains some ambiguity about the specific differences between AR and MR, as well as their relative placement on an extended reality (XR) spectrum (Milgram et al. 1995). For our purposes, we define AR as the simpler implementations that add information or graphics to an environment, whereas MR allows for more sophisticated, spatially aware, and immersive interactions between the virtual and the real.
The difference between AR and MR may be more meaningfully reflected in the various hardware implementations. Many of the latest smartphones and tablets, for example, enable basic AR through software that can place virtual objects into a live view from the camera (Figures 3 and 4). This functionality can also be used to place information virtually into an environment, including text and video “pop-up” introductions to objects and spaces at an archaeological site. Given that most people carry a smartphone with satellite positioning capabilities, AR offers a low barrier to entry for deployment at any site (Jayawardena and Perera 2016). Because of this greater accessibility, AR has also seen much more research focus from archaeologists than MR has. MR, on the other hand, works with hardware that places transparent digital screens directly in the view of both eyes by using a head-mounted device (HMD), thereby enabling a stereoscopic 3D view of the virtual world superimposed on the real world (Figure 5). Together with directional audio, this enables a multisensory and fully immersive MR environment, where virtual objects are placed seamlessly into the real world from the perspective of the user. MR HMDs have only come to market in the last few years, and they still face several limitations, such as a small field of view, given the enormous processing power required to render virtual objects in real time and to map the spatial layout of the local environment. The Microsoft HoloLens 2 (Figure 1) is currently the most widely available device, but various other vendors have developed this technology or are planning to enter the market in the next few years (Aniwaa Pte. Ltd 2021; Fathi 2021). These types of HMD should be distinguished from less sophisticated smartglasses, which place a small phone-like screen in the peripheral view to convey information.
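To illustrate the kind of positioning arithmetic that underlies such GPS-based AR, the following minimal sketch converts the offset between a user's position and a point of interest into local east and north distances in meters, which a phone-based app could then use to place a virtual label over a feature on site. The coordinates, function name, and flat-earth approximation are illustrative assumptions, not a description of any particular app:

```python
# Minimal sketch: convert the offset between the user's GPS position and a
# point of interest into local east/north metres, the kind of calculation a
# phone-based AR app performs before drawing a virtual label on site.
# The equirectangular approximation is adequate over site-scale distances.
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def local_offset_m(user_lat, user_lon, poi_lat, poi_lon):
    """Approximate east/north offset (metres) from the user to the point of interest."""
    lat_rad = math.radians(user_lat)
    d_lat = math.radians(poi_lat - user_lat)
    d_lon = math.radians(poi_lon - user_lon)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(lat_rad)
    return east, north

# Example with hypothetical coordinates: where to place a virtual site label.
east, north = local_offset_m(39.9500, -75.1900, 39.9503, -75.1895)
print(f"place marker {east:.1f} m east, {north:.1f} m north of the user")
```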
THE RANGE OF APPLICATIONS FOR AR AND MR IN ARCHAEOLOGY
Over the last decade, interest has been building in the deployment of AR and MR for studying the past. For example, Bekele and colleagues (2018) surveyed applications in the broader cultural heritage field and found that AR was most often used to enhance public exhibits, but it was also used for the reconstruction of everything from paintings to statues. Within archaeology, research has progressed in a variety of directions regarding the application of AR to our field. In a prior review, Ellenberger (2017) highlighted how AR can be deployed in public engagement and education. Keil and colleagues (2013) experimented with AR in the Athens Acropolis Museum, which included recoloring the actual ancient objects, and Amakawa and Westin (2018) emphasized that the virtual can help in telling the pasts of people who are materially underrepresented or whose heritage is largely intangible. Bruno and colleagues (2019) even experimented with applying AR to underwater sites in order to enhance the diving tourist's experience. Liestøl and colleagues (2018) used tablets to allow users standing in the actual landscape to see how the sea level along the coast of Norway had changed over time. Their app also linked to an online database providing multimodal information about archaeological sites of the Neolithic period. Dragoni and colleagues (2018) experimented with tablet-based AR technologies to determine performance parameters for overlaying real-scale Roman architecture on the in situ remains. Many other researchers have also experimented with using AR to guide tourists through archaeological sites and to present locationally aware information superimposed on sites or objects in a landscape or museum (e.g., Morandi and Tremari 2017; Pierdicca et al. 2016).
Experiments have also begun with public engagement using MR, even though it may not yet be comfortable for visitors to wear an HMD such as the HoloLens for extended periods of time. Bekele's (2019) “walkable MxR Map” provided a room-sized map that was projected on the floor and viewed through the HoloLens. Users could be guided through a museum or heritage site and interact with cultural content through HoloLens input methods such as gesture, gaze, and voice activation. Hammady and colleagues (2020) also deployed the HoloLens to increase the immersive experience of visitors to the famous Egyptian Museum in Cairo, enabling interaction with virtual guides at display cases where users could also manipulate virtual objects. This is only a small sampling of the examples of using AR and MR in public engagement, the most developed subtopic of AR and MR in archaeology, which has a long history of scholarly inquiry (Papagiannakis et al. 2005; Vlahakis et al. 2001).
Studies of other applications, however, point to the much wider potential of AR and MR for deploying large 3D datasets in archaeology, including for data management and research. Dilena and Soressi (2020) demonstrated the potential and current limitations of working with large datasets of precisely positioned excavated artifacts. Although there are challenges due to memory and sensor constraints, they proposed using phone-based AR to place artifacts in their correct positions at the real site, hovering above the ground where they were originally excavated, which we agree is one of the most powerful potential uses of AR and, eventually, of fully immersive MR. Barbier and colleagues (2017) developed an annotation system to allow archaeologists to examine megalithic cave artwork remotely from the office or classroom. This project deployed the Microsoft HoloLens 1 to enable interaction with the stone surfaces, including the ability to write notes attached to specific locations on those surfaces (a simple sketch of this kind of surface-anchored annotation follows below). Because the system works with the artworks outside their original context and engages only with individual pieces, however, a VR setup would have sufficed for this project; no MR-specific functionality is really required. Although visual interaction with sites is well developed for visitors, some researchers are turning their attention to using AR with other senses. Sikora and colleagues (2018) used headphones and a smartphone to give visitors a sense of what a medieval site would have sounded like, and Eve (2017) enabled multisensory interaction with archaeological landscapes by adding sound and smell to the visual AR experience.
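The surface-anchored annotation just mentioned can be illustrated with a minimal sketch: attach a text note to the mesh vertex closest to a selected 3D point so that the note stays fixed to that spot on the digital surface. This is our own illustration of the general idea, not the system built by Barbier and colleagues; the file name, coordinates, and note text are hypothetical:

```python
# Minimal sketch: anchor a text note to the nearest vertex of a 3D surface
# model, so the note stays attached to the same spot on the stone.
import numpy as np
import trimesh

mesh = trimesh.load("panel_scan.ply", force="mesh")   # hypothetical scanned surface
annotations = {}                                      # vertex index -> note text

def annotate(point_xyz, note):
    """Attach a note to the mesh vertex closest to the selected 3D point."""
    distances = np.linalg.norm(mesh.vertices - np.asarray(point_xyz), axis=1)
    vertex_id = int(distances.argmin())
    annotations[vertex_id] = note
    return vertex_id

vid = annotate([0.42, 0.10, 1.35], "possible pecked motif; re-examine under raking light")
print(f"note anchored to vertex {vid} at {mesh.vertices[vid]}")
```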
Brondi and colleagues (2016) attempted to use the HoloLens's gesture capabilities to enhance hands-on training in cultural heritage, particularly intangible culture. When using the device, a user follows the prerecorded hand movements of printmakers creating stamps or of weavers at a loom. The user's hands can be superimposed on the prerecorded hands to enhance the training by replicating each precise movement. This technique might also have application potential in training archaeological excavation methods or conservation treatments. An extremely promising example of the deployment of MR comes from Gaugne and colleagues (2019). The authors propose the novel application of using MR to guide a micro-excavation by allowing a user to “see inside” the excavation target, in this case funerary urns. They used CT scan data to produce 3D visualizations of the solid objects buried within the sediment fill of an urn. With an MR projection of these 3D models, it would appear to the user that the objects are floating inside the urn. Consequently, while excavating the actual urn, the user would see the precise location, size, and orientation of any object captured by the CT scan. This could greatly help to guide excavators' decisions as well as prepare them for the moment they begin to uncover each object. The authors note that training with MR could usefully reduce potential excavation damage, such as in situations where objects overlap or sit in very complex positions, and it would also allow excavators to estimate working time.
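The general technique behind such visualizations, turning a CT volume into a surface mesh that an MR headset can display, can be sketched briefly. The following is our illustration of isosurface extraction under assumed inputs (the file name and density threshold are hypothetical), not the pipeline used by Gaugne and colleagues:

```python
# Minimal sketch: extract a 3D mesh of dense objects from a CT volume so that
# an MR viewer could display it "inside" an unexcavated urn.
# Assumptions: the CT data have been saved as a NumPy array, and the density
# threshold separating objects from sediment has been chosen by inspection.
import numpy as np
from skimage import measure
import trimesh

volume = np.load("urn_ct_volume.npy")     # 3D array of CT densities (z, y, x)
THRESHOLD = 1200.0                        # hypothetical density cutoff for bone or metal

# Marching cubes extracts the surface where the density crosses the threshold.
verts, faces, normals, _ = measure.marching_cubes(volume, level=THRESHOLD)

mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("urn_contents.glb")           # glTF/GLB is widely supported by AR/MR viewers
print(f"extracted {len(mesh.faces):,} triangles")
```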
SOME CHALLENGES OF 3D DATA
The recent proliferation of 3D datasets has encouraged new discussions on the ethical deployment of these models. One well-known case involved the Roman-period triumphal arch in Palmyra, Syria, where digital methods were both celebrated for their preservation potential (Denker 2017) and critiqued for their tendency toward cultural appropriation (Kamash 2017). Recently, the Koç University Maritime Archaeology Research Center (KUDAR) and the American Research Institute in Turkey (ARIT) held an online discussion entitled “National Jurisdiction in the Digital Realm” (ARIT 2021). This conversation examined the dissemination of 3D digital models of cultural heritage that often represent national identities. The presenters considered the roles of, and relationships between, nation-states and the individuals who can now build high-fidelity 3D models, and they explored questions surrounding the control and ownership of those models. In addition, individual users can now modify digital representations, adding, changing, or deleting 3D components of digital heritage according to personal preference; how such modifications affect the dissemination of information about the original cultural heritage remains an open question.
Another consideration is who has access to the expensive technologies used to create accurate digital models, and who has the training to deploy these methods successfully (Kansa and Kansa 2021; Roosevelt et al. 2015:341). Because these technologies are new, specialists have only just begun to share information on how best to deploy them on archaeological projects. For example, Rahaman and colleagues (2019) and Douglass and colleagues (2019) offer comprehensive proposals for workflows to create 3D models from 2D imagery specifically for the purpose of cultural heritage deployment in MR. Close collaborations between archaeologists and digital specialists can also support our digital practice (Cobb et al. 2019).
Although 3D models can help to preserve information about the past and objects from the past, the creation of these large 3D datasets also often raises the question of how to preserve the digital data themselves over the long term. We create digital heritage by conducting careful, time-consuming measurements to capture the highly accurate 3D data we need for research. Because 3D data hold similar importance to other types of digital or physical data, they should be preserved and made accessible (Kansa 2012). Nowadays, cloud storage systems simplify preservation by abstracting the physical storage mechanism and by enabling easy data replication, often across geographically distributed locations (Mering 2015). Several international online archaeological repositories plan to store large datasets for extended periods of time (Galeazzi et al. 2016; McManamon 2017).
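As a small, concrete example of the housekeeping that long-term preservation involves, the following sketch records a SHA-256 checksum for every file in a 3D dataset so that copies deposited in different repositories can later be verified against the originals. This reflects common digital preservation practice in general, not a procedure prescribed by the repositories cited above; the directory and file names are hypothetical:

```python
# Minimal sketch: build a fixity manifest (file path -> SHA-256 checksum) for a
# directory of 3D models, so replicated copies can be verified later.
# The directory name "site_3d_models" is a hypothetical example.
import hashlib
import json
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so multi-gigabyte meshes do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(dataset_dir):
    """Return {relative file path: checksum} for every file under dataset_dir."""
    root = Path(dataset_dir)
    return {
        str(path.relative_to(root)): sha256_of(path)
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }

manifest = build_manifest("site_3d_models")
Path("site_3d_models_manifest.json").write_text(json.dumps(manifest, indent=2))
```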
LOOKING FORWARD
Recent world events have highlighted the potential benefits of XR technologies for opening up remote interaction with archaeological sites and objects, especially for education and training. The global XR market is estimated to grow by 18% each year until 2028 (Nigam 2021), and virtual museums (Romano 2020) and virtual site tours (Dziuba 2021) have quickly developed worldwide. Conferences (including those of the Society for American Archaeology) and other meetings have experimented with virtuality, and schools and universities are now experienced in providing both face-to-face and virtual courses. The increasing application of machine learning in archaeology should also help us better use and interact with our large and complex 3D datasets (Bickler 2021). Given their ability to combine the real world of our present archaeological evidence with the virtual worlds of our interpretations of the past, AR and MR technologies are particularly well positioned to contribute to archaeological practice and research. Improved hardware will make AR and MR experiences easier to use, more realistic, and more comfortable. And then, perhaps one day, archaeologists will naturally record new evidence and interact with previously removed evidence while using MR in situ.
Acknowledgments
I would like to thank Mr. Wong Chi-him Leo of the Lending Services and Learning Environment division of the University of Hong Kong Libraries, who kindly set up the HoloLens 2 for student experimentation and photography.