research-article
Open access

3VR: Vice Versa Virtual Reality Algorithm to Track and Map User Experience

Published: 24 June 2024
Abstract

Understanding how users interact with virtual cultural heritage can provide digital curators with valuable insights into user behavior and also improve the overall user experience through the ability to observe and record the interactions of virtual visitors. This article introduces a new User Behavior (UB) tracking algorithm that we developed to investigate the salience of Virtual Reality (VR) panoramic regions. The algorithm extracts the importance of a Region of Interest (ROI), determining patterns of the visitors’ virtual movement and interest in combination with statistics of captured browser activity. The input of our algorithm is a virtual online interactive platform (the Virtual Museum of the Civic Art Gallery of Ascoli Piceno in Italy) with 81 panoramic images of 16,386 × 8,192 pixels and several interactive features, including maps, thumbnails, and menus. The software engine of the “Vice Versa” VR tracking model operates on the inverse functions of all descriptive functions (descriptors), each assigned to a particular interactive feature such as viewing multimedia content or observing the panoramic environment. The tracking experiment was performed online, and the web virtual museum case study collected behavior information from 171 visitors around the world. The collected data, multimedia and textual content, and the coordinates of the ROIs are then subjected to standard statistical operations to define common patterns of UB. We discovered that the ROIs are mostly mapped onto the artworks and that it is possible to obtain patterns about the main interests of users. The developed tool offers a guideline for the design of panoramic tours; the potential benefits for museums are to understand the public, verify the effectiveness of choices, and re-shape the cultural offer based on visitors’ needs. Exploiting this kind of user experience, our algorithm ensures relevant feedback during virtual visits and thus paves the way for the further development of a recommender system.

    1 Introduction

The digital era governs all segments of our lives, including the need for cultural enrichment and progress in this sphere. The protection and preservation of cultural heritage are not excluded from this general trend. New generations are becoming digital and virtual, which cultural and research institutions should not ignore. Moreover, all structures of society have to adapt their operations and improve their methods of promotion and presentation to effectively reach their digital consumers.
The increase in the level and comprehensiveness of the digital transformation of societies around the world has been evident over the last decade [1]. This is mostly a consequence of improvements in fundamental technologies, hardware/software development, and increases in Internet speed. Unfortunately, the COVID-19 pandemic significantly affected our entire lives, including cultural enlightenment and social life. According to a United Nations Educational, Scientific and Cultural Organization (UNESCO) document [2], the COVID-19 crisis resulted in the closure of \(90\%\) of museums, whereas \(10\%\) of museums may never reopen. In response, we are constantly looking for new ways to overcome these negative influences and, by using the advantages of technological progress, to create a new environment in which we can continue our progress [3].
The museum sector responded rapidly in developing virtual experiences for its visitors [4]. Institutions are actively embracing digital platforms, and many of them opted to try out new digital software, previously untapped platforms, and channels [5, 6]. Chronologically important milestones toward cultural heritage digitization include the launch of the Europeana prototype in 2008 [7]; the establishment of the Digital Agenda for Europe in 2010 [8] and, as part of it, the publication of “The New Renaissance” by the Comité des Sages in 2011 on bringing Europe’s cultural heritage online [9] and the “Commission Recommendation on the digitization and online accessibility of cultural material and digital preservation” in 2011 [10]; the establishment of new scenarios during the European Year of Cultural Heritage in 2018 [11]; the “Digital4Culture” strategy defined by the New European Agenda for Culture in 2018 [11]; and, in the same year, the Leeuwarden Declaration on the adaptive re-use of built heritage [12].
The digital trend continues even nowadays, and virtual visits to museums keep rising since virtual tours have proved to be effective, entertaining, and educational [13]. The Google Arts & Culture platform [14] is an example of providing easy online access to digital cultural heritage and virtual museums. Museum directors and curators are increasingly aware of the importance of having content online to overcome a dramatic loss of visitors. Thus, collecting and analyzing data on user activities is essential. In response to the necessity of such information, many investigations have been performed, including User Behavior (UB) tracking in both real and virtual environments.
Some relevant research that encompasses electronic tracking in the physical environment includes the usage of RGB and depth cameras to capture visitors and deep learning algorithms for data extraction [15]; sensors for facial recognition with a convolutional neural network for defining the interests of visitors [16]; a WiFi-based head-counting framework for defining the type of visitors [17]; Bluetooth proximity detection for the analysis of visitors’ behavior in the Louvre museum [18]; Bluetooth Low-Energy beacons for UB assessment and interaction with the artworks in the Ateneo Art Gallery in the Philippines [19]; and signal sensors for social behavior patterns of visitor pairs [20]. Although the above-mentioned trends shed light on the importance of understanding the dynamics that occur in a real environment, the same cannot be said for virtual ones. Indeed, to our knowledge, there are no well-established methodologies that enable developers and insiders to infer statistics and/or behavioral patterns within the “virtual compartments” of a museum exhibition.
Our research focuses on human behavior tracking solely in the virtual panoramic environment and strives to determine the most important region in the museum observed by an individual. The input of our algorithm is the virtual online interactive platform (Virtual Museum of the Civic Art Gallery of Ascoli Piceno in Italy) [21] with 81 panoramic images of 16,386 \(\times\) 8,192 pixels and several interactive features including maps, thumbnails, and menus. The tracking model operates as a reverse virtualization process that assigns particular descriptive functions (descriptors) to each interactive feature, such as viewing multimedia content and observing the panoramic environment. The tracking experiment was performed online, and the web virtual museum case study attracted the attention of 171 visitors around the world. The tracking output is then categorized and measured to define common patterns of UB. Our primary research contributions are (1) the development of the virtual web museum and (2) a novel approach for collecting users’ data through the “Vice Versa” Virtual Reality (3VR) algorithm. By collecting and analyzing such data, we show that the correlation between the equirectangular projection of the view position and the value of the Field of View (FOV) parameter is crucial for defining the salient region, i.e., the most perceptually marked region of the observed panoramas. Our secondary research contribution is the tracking of the events that fire functions when specific buttons are activated. These contributions address the challenge of understanding the patterns and preferences of users as they interact within the Virtual Reality (VR) environment. This is made possible through the use of the “salient data repository” and a connected dashboard, allowing museum curators and experts to assess the effectiveness of their decisions. They can then enhance the online museum by introducing new features and diverse content to engage users and maintain their interest. In our approach, the digital web museum is seen as living content that can be updated according to user suggestions and inclinations.
    The article is organized as follows. In Section 2, we give an overview of the relevant work focusing on UB tracking in virtual environments. Section 3 describes the concepts, hypotheses, and challenges in our user study. Within the same section, we describe the case study of the Civic Art Gallery of Ascoli that we exploit for the interactive virtual tour development and the design of the virtual museum web platform. Our algorithm is explained in detail in Section 4 along with the data collection protocol, the description of fundamental geometrical and procedural features used for the calculations in the virtualization process, and definitions of each segment of the reverse virtualization process. We show results and analyze the properties of selected Regions of Interest (ROIs) in Section 5. The conclusion with a brief recap of our research achievements and goals for future work is given in Section 6.

    2 Related Work

UB tracking is implemented in almost all applications and websites used on smart devices across various fields including education, economics, medicine, retail, gaming, and cultural heritage. Data collection from users used to be performed with a conventional technique: surveys, usually in the form of questionnaires. Lorenzo et al. assessed UB, specifically user aesthetics and emotions, in virtual 360° videos using different technologies [22]. To evaluate user satisfaction with x-commerce for fashion retail with integrated vocal dialogues with the Amazon Alexa virtual assistant, two VR-based immersive experiences were developed by the authors of [23] and subjected to a thorough questionnaire-based evaluation. Sarraf performed surveys to collect data on the demographics of average museum website visitors and their perceptions [24]. Peacock suggested a framework for testing the user experience of a museum based on the analysis of web log data [25]; accessibility, functionality, attractiveness, and usefulness of the website were the main questions. Barbieri et al. discuss feedback on learnability, device, and system efficacy through a user study using the trackball and touch-screen systems of a virtual museum [26]. The focus was on comparing the effect of these technologies on UB and performance and on defining which technology participants prefer and enjoy the most.
To capture users’ interaction across VR sessions in Three-Dimensional (3D) space, an image-based encoding model was introduced in [27], whereas quantitative and qualitative measurements were performed in [28] to investigate satisfaction across users who tested web and mobile applications and those who were engaged in an on-site visit to the museum. To record interactions within two different scenarios, virtual and real visits, the authors performed several questionnaires and used remote analytic tools and cameras coupled with computer vision algorithms that detect human motions. The authors of [29] proposed a method for assessing the effectiveness of 360° virtual tours using an analytical tool. Interactions of the users were measured through embedded JavaScript code, and the final results presented the quantity and types of performed actions, including button clicks, the way users accessed different web links, visit durations, and so forth. One notable technique for analyzing UB in virtual environments is the use of eye-tracking technology with VR headsets [30]. For example, in [31], Google Analytics is featured along with HTTP server logs and ClickStream logs to gather information on stickiness and site loyalty, social media referrals, and virtual exhibition usage of the Europeana portal. Google Analytics combined with eye-tracking for observing user interaction in the museology field is leveraged in [32].
    In line with the latest research achievements, our study follows the previously mentioned directions of research, but with a specific focus on the ROI detection during the panoramic virtual tour rather than the developed platform assessment that has been mostly performed in the related works. Besides the salient regions, this study, for the first time, aims to assess the interactions with digitized High-Definition (HD) paintings and 3D models defined as the most important content of the museum.
Additionally, the tracking algorithm is specifically tailored to capture user interactions within the panoramic interactive environment. This differs from the more typical visual tracking approaches on the web, which rely on tracking parameters through the content displayed by the browser, often leading to outcomes associated with a Two-Dimensional (2D)-rendered image. In contrast, the homogeneous nature of the panorama ensures that, even as the viewing angle is shifted (and thus the image is re-rendered in the browser), the exact position of the object within the 3D space is consistently known. This uniformity across the panoramic view obviates the need to collect numerous 2D image fragments and to locate pinpoints across different images for behavior tracking. When a user navigates within the panorama, the algorithm can maintain tracking without having to save all possible views or recalculate the object position within a single panorama, which would require significantly more effort than monitoring a specific artifact in a homogeneous environment such as a panorama.
Unlike some comparable work in the literature, we did not want to collect any personal information about the visitors (name, email address, date of birth, and so on), because they might not behave naturally with the feeling of being observed (tracked). Moreover, this approach ensures compliance with the General Data Protection Regulation, since such information is exposed neither throughout this article nor externally.

    3 Study Design

The motivation driving the development of our algorithm is to exhibit the collections of museum institutions, particularly aiming to highlight specific pieces of art according to the needs of the public. To achieve this, a connective framework is essential to bridge the gap between two fundamentally different groups: museum experts, such as curators and historians, and visitors. The vice versa method in all its aspects facilitates the gathering of valuable information from both groups. In this collaborative approach, experts from the Civic Art Gallery initially select the pieces, which we then digitally render into 3D models. These chosen pieces are semantically enriched by museum professionals through textual descriptions, allowing for a deeper examination by visitors, whereas the remaining pieces serve as visual elements within the panoramas. Conversely, our algorithm has identified the ROIs that visitors are drawn to, embodying its reciprocal purpose. This data, captured by tracking visitor interactions, is invaluable for museum institutions: it enables them to tailor the content of the virtual tour based on visitor interactions. Furthermore, this insight allows the institution to craft varied tours, including specialized guided experiences, by analyzing the interaction data.
3VR aims to comprehensively analyze and accurately differentiate the obtained data. Although a laboratory user study gives explicit data from trained and monitored subjects, our goal was to perform the experiment in a real-life scenario; hence, we designed a comprehensive web page for the virtual museum without technically demanding requirements. An important task for such a large and complex VR museum exhibition of 81 panoramic views is to involve a sufficient number of visitors, since collecting enough data from user interactions in the different rooms of the museum is essential. To this end, we leveraged a variety of social media platforms to disseminate the web link of the virtual museum, aiming to attract a substantial number of visitors to our user study.
    Figure 1 illustrates the design scheme of the developed tracking system and includes common and separate output data for each panorama region and click events collecting, respectively.
    Fig. 1.
    Fig. 1. The scheme of the tracking system design for the virtual museums based on panoramic virtualization, where \(\mathrm{F}\) is the total number of panoramas \(\mathrm{P_{a}}\) .
Panoramas are subjected to two types of tracking functions: (1) JavaScript click events and (2) the angles \(\theta_{x}\) and \(\theta_{y}\), which in equirectangular space correspond to horizontal and vertical movement, respectively. Their outputs are generated in a local spreadsheet simultaneously upon each user interaction. The common output of these two functions is locally generated data including the timestamp of the interaction (date and time), the unique user ID, the user’s IP address, the user’s web browser, the panorama ID, and the width and height of the website body element. Additionally, the horizontal and vertical pixel values \(p_{x}\) and \(p_{y}\) and the vertical FOV of the panorama are generated for the panorama region calculation, and click-event data, named interactive elements, are generated for the content and user interface assessment, which we explain in Section 4.
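As a minimal sketch of the common output record described above (field names are ours, not taken from the authors' implementation; in a browser, the environment values would come from `navigator.userAgent` and `document.body`):

```javascript
// Hypothetical sketch of one locally generated tracking record. In a browser,
// env would be built as { userAgent: navigator.userAgent,
// bodyWidth: document.body.clientWidth, bodyHeight: document.body.clientHeight }.
function buildTrackingRecord(env, panoramaId, userId, extras) {
  return {
    timestamp: new Date().toISOString(), // date and time of the interaction
    userId,                              // unique user ID
    browser: env.userAgent,              // user's web browser
    panoramaId,                          // ID of the currently viewed panorama
    bodyWidth: env.bodyWidth,            // width of the website body element
    bodyHeight: env.bodyHeight,          // height of the website body element
    ...extras, // { px, py, vFov } for region tracking, or a click-event descriptor
  };
}
```

Both tracking functions could then share this builder, differing only in the `extras` they attach.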

    3.1 Civic Art Gallery—Case Study

The Civic Art Gallery is situated in Ascoli Piceno, a small town in the Marche region of Italy. It has a prominent role in the museum system of Ascoli, and the collection curator, Prof. Stefano Papetti, decided to focus on the digitization of its artifacts, specifically indicating the highlights of the museum. The Ascoli Museum system established a collaboration with the Università Politecnica delle Marche research group with the main goal of supporting its digital transformation. The key available technologies, such as Virtual and Augmented Reality, are included in the Virtual Immersion in Territorial Arts project, which is aimed at developing effective digital experiences both during the museum visit and for remote visits. Only some of its paintings and statues are presented as HD paintings in our virtual museum, as “curator choices,” while all the artworks can be enjoyed by the user in the high-quality panoramas. Communicating the collection online is particularly challenging in this case because the museum holds a huge collection comprising not only statues and paintings but also furniture and accessories.
One of the possibilities related to UB observation is to verify whether people’s attention is directed to the main artifacts or not, thus implementing tools that can help the curator and experts orient their communication and educational activities to people’s expectations. In this light, the Civic Art Gallery was chosen as the case study because it is a common example of an exhibition gallery with a rich collection that is directly linked to the rooms it belongs to. The environment is distinguished by dense meanings and values, which is challenging for both virtual and on-site visits. Therefore, the observation of UB may give rise to unexpected and non-trivial results.

    3.2 Interactive Virtual Web Museum

According to [33], exhibition spaces must be inclusive, welcoming, and accessible, yet promote the best visiting experience. Hence, we created a user-friendly virtual tour of the museum adapted for desktop use and available in English and Italian. The virtual tour is composed of 81 interconnected panoramic images that visitors can explore, moving through the rooms via the map, hotspots, or the thumbnail menu that shows only the main rooms. Although various commercial tools allow the creation of visually engaging and highly interactive interfaces, they may not accommodate the subsequent steps of our method. Therefore, we developed a custom modular system architecture, thereby ensuring user tracking, input type definition, and data analysis according to the specific needs of our study. Additionally, we did not rely on any external software APIs, choosing instead to program directly within an integrated development environment.
    The advantage of our application is its universality, i.e., it can be easily used by professionals and all stakeholders even without experience in cultural heritage. Its tracked interactive features and relations between them are conceptually presented in Figure 2.
    Fig. 2.
    Fig. 2. The conceptual representation of the virtual tour application functionality (the blueprint). Each function is annotated with the corresponding descriptor that is described in detail in Appendix A.1.
Our virtual museum offers visitors the possibility to take a closer look at the digital 3D models and HD paintings that we scientifically digitized using photogrammetry and laser scanning techniques. We could have exposed more than six 3D models and four high-quality paintings to the users, but that would have altered our research focus by biasing users toward our choice of artwork. Instead, we are more interested in the likelihood of users reaching them spontaneously and in how that happens. Besides the visual representation, we provide additional information including textual descriptions of the artifacts, the rooms, the artists, and the building.

    4 Methodology

The accumulated dataset from different visitors allows the investigation of a multitude of questions of interest in cultural heritage and perceptual psychology. We were inspired by the approach of [34], which defines salient regions on 3D meshes based on several geometrical properties of the points selected by a large number of users. More precisely, the authors asked several basic psychological questions that we adapted to correspond to our scope of study. Although the subjects of our article (panoramic images) and those of [34] (3D models) are completely different, there is a surprisingly strong bond in the visual and psychological aspects: the salience in any visually represented matter. The questions that we opt to answer include “How consistently do people observe regions on the same panorama?” “How are observed regions distributed on the panorama?” and “What features or contents on the panorama are prominent at the observed regions?”
    This section introduces the methodology that we have applied to get answers to previously determined questions. The 3VR algorithm with a flow of all operation processes is depicted in Figure 3.
    Fig. 3.
    Fig. 3. The flowchart of the operations within our algorithm.

    4.1 Reverse VR Process Definitions

    The basic engine of the 3VR technique is our developed algorithm for the reverse virtualization process. Thus, this subsection begins with a brief analysis of the virtualization process in the panoramic tour development. In addition, all main mathematical definitions of both VR and 3VR processes are introduced and explained.
We can view the interactivity of a given virtual environment as a set \(\mathrm{D}=[\mathrm{D}_{1},\mathrm{D}_{2},\ldots,\mathrm{D}_{\mathrm{m}}]\) of \(\mathrm{m}\) descriptors, i.e., functions \(\mathrm{D}_{i}=f_{i}(t_{1},t_{2},\ldots,t_{n})\) whose \(n\) arguments are obtained from the interactive input and form a vector of UB trackers \(\mathrm{T}=[t_{1},t_{2},\ldots,t_{n}]\), where \(n\) is the total number of input tracking values. Within the virtualization process, we need to describe all the calculations and projections of panoramas in equirectangular space and the derivation of the focal points from the user interactions. We introduce all mathematical notations and definitions that we use further in this article. Figure 4 represents the spherical equirectangular projection in 2D space. The resolution of a panorama is \(\mathrm{W}\times\mathrm{H}\) pixels, since it covers a 360° FOV horizontally and a 180° FOV vertically. The focal length is defined as \(\mathrm{r}_{f}=\mathrm{W}/(2\pi)\) pixels. The panorama coordinates, centered on the horizontal vanishing line, span \(\begin{bmatrix}\mathrm{-W/2,W/2}\end{bmatrix}\times\begin{bmatrix}\mathrm{-H/2, H/2}\end{bmatrix}\).
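To make the formalism concrete, here is a minimal sketch: a descriptor \(\mathrm{D}_i\) is just a function over tracker values. The example descriptor and tracker values below are illustrative, not the article's actual descriptor set.

```javascript
// A descriptor D_i is a plain function over tracker values t_1, ..., t_n
// taken from the interactive input. Example: the focal length r_f = W / (2*pi).
const focalLength = (W) => W / (2 * Math.PI); // in pixels

// Tracker vector T captured from one interaction (illustrative values);
// here n = 1 and the single tracker value is the panorama width W.
const T = [16386];

const rF = focalLength(...T); // roughly 2,608 px for a 16,386-px-wide panorama
```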
    Fig. 4.
    Fig. 4. Geometry derivation [35]: (a) 2D representation of panorama plane where \(x\) and \(y\) are respectively horizontal and vertical axis of its equirectangular projection, (b) the point \(\mathrm{P(X,Y,Z)}\) in the ground plane \((\mathrm{Y}=0)\) with the assigned values of radius \(\mathrm{r}\) and angle \(\theta_{x}\) , and (c) slice of 3D sphere at the \(\mathrm{h}=1.7 \mathrm{m}\) from the ground plane and angles \(\theta_{y}\) and \(\theta_{y}^{{}^{\prime}}\) that build points \(\mathrm{P}\) and \(\mathrm{P}^{{}^{\prime}}\) respectively.
For the point \(\mathrm{P}(x,y)\) on the panorama plane given in \(\mathbb{R}^{2}\) space, we define \(p_{x}\) and \(p_{y}\) as the pixel values along the horizontal width \((\mathrm{W})\) and vertical height \((\mathrm{H})\) axes of the panorama, respectively. The point \(\mathrm{P}(x,y)\) is a mapping of the point \(\mathrm{P(X,Y,Z)}\) in \(\mathbb{R}^{3}\) space, where \(\mathrm{Y}=0\) since it is positioned on the ground plane. Let \(\theta_{x}\) represent the angle in the x-direction and \(\theta_{y}\) the angle in the y-direction. Using the proportions \(\theta_{x}\colon\theta_{x_{max}}=x\colon\mathrm{W}\) and \(\theta_{y}\colon\theta_{y_{max}}=y\colon\mathrm{H}\), and since \(\mathrm{W=2H}\), the angles can be expressed explicitly in terms of \(\mathrm{H}\) only:
    \(\begin{align}\theta_{x}=\frac{\pi}{\mathrm{H}}x\text{; }\theta_{y}=\frac{\pi}{\mathrm{H}}y.\end{align}\)
    (1)
    For the most common realistic perspective of the visitor, the vertical position of the camera is defined as \(\mathrm{h}=1.7 \mathrm{m}\) above the ground plane. Then, the coordinates of the point \(\mathrm{P(Y=0)}\) and its corresponding radius to the circular projection of the panorama width \(\mathrm{W}\) can be calculated as
\(\begin{align}\mathrm{P_{X}=r\cos{\theta_{x}}\text{; }P_{Z}=r\sin{\theta_{x}}\text{; }r=h\left|\cot{\theta_{y}}\right|.}\end{align}\)
    (2)
If the point \(\mathrm{P}^{\prime}(\mathrm{Y}\neq 0)\), above the ground plane, is defined as an inverse projection of \(\mathrm{P}\), its projection onto the vertical axis can be denoted as \(\mathrm{P^{\prime}_{Y}}\neq 0\). With a slight abuse of the notation from Figure 4, any point \(\mathrm{P}\) can be calculated as \(\mathrm{P_{Y}=h+r\tan{\theta_{y}}}\). If we assume the result of the visitor activity tracking is a captured position of the 3D point \(\mathrm{P^{\prime}(X,Y,Z)}\) in the interactive virtual space \(\mathbb{R}^{3}\), then we can define the vector of its equirectangular projections \(\mathrm{P^{\prime}=\left[P_{X}^{\prime},\ P_{Y}^{\prime},\ P_{Z}^{\prime}\right]}\). For the function \(f(x,y)\), defined as a mapping of the \(\mathbb{R}^{2}\) space of the panorama onto the \(\mathbb{R}^{3}\) spherical space, we can introduce a function \(g(\mathrm{P}^{\prime})\) that maps the point \(\mathrm{P^{\prime}(X,Y,Z)}\) to the point \(\mathrm{P}(x,y)\) as the inverse of \(f(x,y)\). Then, \(g\left(\mathrm{P}^{\prime}\right)=f^{-1}(x,y)\). Since the procedure of mapping the virtual space is defined through two separate steps, we derive explicit forms of its inverse function in the same way.
    Using all previous expressions where \(\mathrm{h}=1.7 \mathrm{m}\) is the constant value, and radius \(\mathrm{r}\) is given by the focal length of the virtualization system, we can calculate both angles for the corresponding point of interest \(\mathrm{P}\) in the spherical system by solving the following systems of equations:
\(\begin{align}\label{S4.E3}\theta_{x}=\arccos\left(\frac{\mathrm{P_{X}}}{r}\right)\text{; }\theta_{x}=\arcsin\left(\frac{\mathrm{P_{Z}}}{r}\right)\text{; }\theta_{y}=\arctan\left(\frac{\mathrm{P_{Y}-h}}{r}\right)\text{; }\theta_{y}=\operatorname{arccot}\left(\frac{r}{\mathrm{h}}\right).\end{align}\)
    (3)
Now, we denote \(\mathrm{H}\) as the result of the JavaScript function that detects the original dimensions of the panorama image, or as a value defined a priori within the virtualization process code. Finally, solving the previous system of equations, we can calculate the horizontal and vertical pixel values of the point of interest \(\mathrm{P}\):
    \(\begin{align}\label{S4.E4}p_{x}=\frac{\theta_{x}\mathrm{H}}{\pi}\text{; }p_{y}=\frac{\theta_{y}\mathrm{ H}}{\pi}.\end{align}\)
    (4)
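The forward and inverse mappings of Equations (1)–(4) can be sketched as follows. This is a simplified illustration assuming \(h = 1.7\) m; the variable names are ours, and \(\theta_x\) is recovered with a single `atan2` call in place of the arccos/arcsin pair of Equation (3).

```javascript
const H_CAM = 1.7; // camera height h above the ground plane, in metres

// Equation (1): equirectangular pixel offsets (x, y) -> viewing angles,
// where H is the panorama height in pixels.
function pixelsToAngles(x, y, H) {
  return { thetaX: (Math.PI / H) * x, thetaY: (Math.PI / H) * y };
}

// Equation (2): project the viewing angles onto the ground plane (Y = 0).
function anglesToGroundPoint(thetaX, thetaY) {
  const r = H_CAM * Math.abs(1 / Math.tan(thetaY)); // r = h * |cot(thetaY)|
  return { X: r * Math.cos(thetaX), Y: 0, Z: r * Math.sin(thetaX) };
}

// Equations (3) and (4): invert the projection, recovering the pixel
// coordinates of the point of interest from a tracked 3D point P.
function pointToPixels(P, H) {
  const r = Math.hypot(P.X, P.Z);              // radius of the circular projection
  const thetaX = Math.atan2(P.Z, P.X);         // merges the arccos/arcsin pair
  const thetaY = Math.atan((P.Y - H_CAM) / r); // arctan((P_Y - h) / r)
  return { px: (thetaX * H) / Math.PI, py: (thetaY * H) / Math.PI }; // Eq. (4)
}
```

Mapping a pixel to the ground plane and back should reproduce the original pixel coordinates, which is a quick sanity check of the derivation.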

    4.2 Data Collection Protocol

We decided to create a UB tracking system for web environments tailored to capture metrics that suit our specific goal: defining the museum content the visitors focused on the most. Moreover, we wanted to store the data and have full control over the obtained values. To address these challenges, we embedded JavaScript code into the output files and collected tracking results in real time. Our collector tool operates on two parallel tracks: (1) it captures the coordinates of the gazed panorama regions and (2) it records the interactions with various interactive features including HD paintings, 3D models, and menus, by listening to JavaScript click events.
Both tracks employ UB descriptors, functions that take parameters such as FOV, tilt, and pan, in addition to the interactive buttons available for selection within the virtual tour. The parameters for these functions are shown in Figure 1. They are categorized as shared data used by both methodologies, alongside specific data collected individually for each. Thus, panorama width, panorama height, and FOV are exclusive to the ROI extraction process, whereas interactive elements are instrumental in tracking direct user interactions.
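A minimal sketch of how the click-event track might be wired: each interactive element is registered under a descriptor name, and activating it logs one row. The registry, function names, and descriptor labels below are our illustration, not the authors' implementation.

```javascript
// Hypothetical click-event tracker: interactive elements are registered with a
// descriptor name, and firing one produces a row for the interactive-elements table.
function makeClickTrack(log) {
  const registry = new Map(); // elementId -> descriptor name
  return {
    register(elementId, descriptor) {
      registry.set(elementId, descriptor);
    },
    // Called from the element's click handler in the browser.
    fire(elementId, panoramaId) {
      const descriptor = registry.get(elementId);
      if (!descriptor) return null; // not a tracked interactive element
      const row = { timestamp: Date.now(), descriptor, panoramaId };
      log(row);
      return row;
    },
  };
}
```

In the browser, `fire` would be attached via `addEventListener('click', ...)` on each registered element.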
To obtain the ROI, the initial challenge involves calculating the coordinates of the observed panorama region. Following [35], we use three principal variables of the panorama (horizontal movement, vertical movement, and FOV) to calculate the location of the central point on the equirectangular image (Equation (4)) and the user’s viewing region. The detailed technical description, along with the figure, is included in Appendix A.3.
    In addition to ROI extraction, our tool also records direct user interactions with virtual museum interactive elements. This includes actions such as opening the museum map, exploring artworks through textual panels, and selecting panoramas from a list, among others. The list of these descriptors is included in Figure 2 while their detailed description is depicted in Appendix A.1.
Moreover, our collection tool is designed to address privacy issues and avoid redundant data capture. To distinguish visitors without requiring personal information, cookies, or any additional actions from the users, we compute a unique user identifier using the open-source FingerprintJS library [36]. It creates a unique code of letters and digits by querying and computing distinctive browser features. To minimize excessive and redundant data in ROI extraction, our collector tool processes the coordinates at 1-second intervals. Additionally, we ensure coordinates are not captured while users engage in certain actions, such as reading artwork descriptions or interacting with 3D models or HD images, or while any informational panel is active. Consequently, tracking is suspended when any of the descriptors that activate a panel is in use, when there has been no mouse activity for over 2 seconds, or when the application tab is not in focus (when the user navigates to a different tab). By filtering out this data, we not only preserve user privacy but also enhance the precision and reliability of our algorithm, resulting in more accurate ROI extraction and mapping.
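The sampling and suspension rules above can be sketched as a small gating function. The 1-second and 2-second thresholds come from the text; the state fields and function names are our assumptions.

```javascript
// Gating logic for coordinate capture: sample at most once per second, and
// suspend while a panel is open, the tab is unfocused, or the mouse is idle.
function makeCaptureGate() {
  const state = {
    lastCapture: -Infinity, // time (ms) of the last logged coordinate sample
    lastMouseMove: 0,       // time (ms) of the last mouse activity
    panelOpen: false,       // an informational panel / 3D or HD viewer is active
    tabFocused: true,       // the application tab is in the foreground
  };
  function shouldCapture(now) {
    if (state.panelOpen) return false;                  // user is reading/interacting
    if (!state.tabFocused) return false;                // tab not in focus
    if (now - state.lastMouseMove > 2000) return false; // no mouse activity for > 2 s
    if (now - state.lastCapture < 1000) return false;   // keep the 1 s sampling interval
    state.lastCapture = now;
    return true;
  }
  return { state, shouldCapture };
}
```

In the browser, `panelOpen` and `tabFocused` would be toggled by the panel descriptors and the tab visibility events, respectively.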
    Ultimately, we transmit and log the data into two distinct tables corresponding to the two approaches described above. This is achieved with a custom script that performs HTTP GET requests, pushing the data to a spreadsheet in real time. At this point, our workflow splits into two separate paths: first, we use MATLAB to assess the data and identify ROIs; second, by organizing the data in tabular form, we offer museum experts a user-friendly way to analyze visitor statistics by selecting variables of interest among the predefined descriptors.
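    Pushing a row over an HTTP GET request amounts to serializing the event into the query string. The endpoint URL and field names below are hypothetical; only the encoding step is shown.

```javascript
// Sketch: one logged event serialized as GET query parameters.
function buildLogUrl(endpoint, event) {
  const params = new URLSearchParams(event).toString();
  return `${endpoint}?${params}`;
}

const url = buildLogUrl("https://example.org/log", {
  user: "a1b2c3",        // FingerprintJS identifier
  pano: "Pa20",          // current panorama
  descriptor: "D9",      // fired descriptor
  t: "1669201684",       // timestamp
});
// In the browser the request itself could then be fired with, e.g.,
// fetch(url, { method: "GET", mode: "no-cors" });
```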

    5 Results and Discussion

    Our experiment included a total of 171 visitors and was conducted from 23 November 2022 at 11:28:04 Central European Time (CET) to 11 December 2022 at 20:11:10 CET, approximately 18 days in total. The virtual tour was accessed by visitors from 19 countries: Austria, Belgium, Bulgaria, Canada, Croatia, France, Germany, Ireland, Italy, Malta, New Zealand, Poland, Portugal, Serbia, Spain, Sweden, Switzerland, the United Arab Emirates, and the United States.
    To ensure clear and meaningful information when defining ROIs, filters were applied to exclude user IDs, panoramas, and short views that could not be identified during data processing. Our analysis revealed that the most visited panoramas were located in the courtyard of the museum, where the virtual tour actually starts (Pa \({}_{1}\) , Pa \({}_{2}\) , and Pa \({}_{3}\) ), and that the viewing frequency of the remaining panoramas showed a gradual decline, as is evident in Figure 5, which presents the panorama visit frequency as cumulative active time in seconds.
    Fig. 5.
    Fig. 5. Active cumulative time in seconds recorded from all users over the 26 panoramas (Pa \({}_{1}\) to Pa \({}_{26}\) ) with an interaction time greater than or equal to 100 seconds. A detailed explanation of the FOV values and the threshold applied to obtain the ROIs is given in the Appendix.
    To highlight the flow of the curve while conserving space on the page, only a subset of the results is shown in the figure, i.e., the 26 most viewed panoramas out of 81. The sharp drop in frequency after the third panorama can be interpreted as a loss of interest and/or a decrease in attention. In addition, the remaining panoramas are less accessible, resulting in a more scattered distribution of views. Our analysis confirmed that key information and exhibits should be highlighted at the beginning of the virtual tour; it is therefore important to incorporate an interactive or dynamic element after the first panoramas to maintain visitor engagement. Using such findings, curatorial planning for the museum interface can be enhanced or even personalized for different groups of visitors. This, paired with traditional methods for measuring user satisfaction [37], can provide a more comprehensive understanding of visitor behavior in virtual environments.
    To investigate the consistency of the observed ROIs, we extracted regions for each panorama from the data obtained from all users. For each panorama Pa \({}_{\text{F}}\) , we sorted its horizontal and vertical pixel values \(p_{x}\) and \(p_{y}\) (obtained from Equation (4)) together with the FOV in ascending order along the x-axis (panorama width \(\mathrm{W}\) ). To cluster the most densely populated points, we calculated the differences between neighboring elements of Pa \({}_{\text{F}}\) along the x-axis and stored them in the vector Pa’ \({}_{\text{F}}\) . The dense groups of points, and with them the ROIs, were selected manually by intuitively choosing the groups with the smallest consecutive values within Pa’ \({}_{\text{F}}\) . Figure 6 presents the ROIs extracted from the visitors’ observation of panorama Pa \({}_{20}\) .
    Fig. 6.
    Fig. 6. The ROI obtained from all users for panorama Pa \({}_{\text{20}}\) . The 21 other panoramas, selected by the relevancy of their extracted ROIs, are presented in Figure 11.
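    The neighbor-difference clustering described above can be sketched as follows. The paper selects the densest groups manually; here a numeric gap threshold and a minimum run length stand in for that manual choice, and both values are illustrative assumptions.

```javascript
// Sketch of the clustering step: sort the horizontal pixel values of all
// recorded view centers for one panorama, take differences between
// neighbors (the vector Pa'_F), and keep runs whose gaps stay below a
// threshold.
function denseGroups(xs, maxGap) {
  const sorted = [...xs].sort((a, b) => a - b);
  const groups = [[sorted[0]]];
  for (let i = 1; i < sorted.length; i++) {
    if (sorted[i] - sorted[i - 1] <= maxGap) {
      groups[groups.length - 1].push(sorted[i]); // small gap: same cluster
    } else {
      groups.push([sorted[i]]);                  // large gap: new cluster
    }
  }
  return groups.filter((g) => g.length >= 3);    // keep only dense runs
}
```

Each surviving group marks an x-range of the panorama where view centers pile up, i.e., a candidate ROI.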
    The values of the elements calculated in Pa’ \({}_{\text{F}}\) differ among panoramas, depending on the observed content, the variety of exhibits, the distribution of the exhibits within the built environment, and the initial parameters, including the horizontal pixel value ( \(p_{x}\) ), the vertical pixel value ( \(p_{y}\) ), and the vertical FOV.
    As illustrated in Figure 6, the ROIs are located in the central area of the panorama, on one of the paintings considered essential. The figure indicates that the visitors were indeed interested in the painting, as shown by the high concentration of regions and points and by the presence of small squares in multiple areas, suggesting a desire for closer inspection. In identifying dense points on the panorama, our primary assumption was that the ROIs are those regions where movements and zoom are observed, and our goal was to uncover such regions in each Pa’ \({}_{\text{F}}\) . Moreover, we excluded from further calculation the groups of unchanging \(x\) -axis coordinates, because these could indicate that a region is not sufficiently relevant.
    It should be noted that \(p_{x}\) , \(p_{y}\) , and the vertical FOV are positioned so that the initial view is directed toward an exhibit when it is reached through its icon in the menu. When an interactive artwork is reached this way, the focus is always on the selected artwork, as is the case for the painting in Figure 6. Additionally, when navigating through the scene hotspots, the values of \(p_{x}\) , \(p_{y}\) , and the vertical FOV remain unchanged when transitioning between panoramas, whereas navigating through the map and thumbnail hotspots results in a predefined view. Owing to these variations in initial views, additional analysis is required to strengthen the validity of the ROIs obtained on panoramas such as Pa \({}_{20}\) . The 21 other panoramas judged suitable to showcase the validity of our study are depicted in Figure 11, which shows a different distribution of ROIs over the panoramas and, in several cases, a different number of clustered points.
    To assess the most frequently occurring descriptors during the visit, we extracted the 24 functions that we defined as crucial for our investigation, shown previously in Figure 2. To facilitate the examination of the data, we clustered functions of the same type. For example, when the visitor selects the panorama Pa \({}_{\text{F}}\) using the scene hotspot or presses the interactive HD painting “Passeggiata Amorosa,” the corresponding function “Scene hotspot—panorama Pa \({}_{\text{F}}\) ” or the “Panel HD Painting” function dedicated to the selected painting is fired, respectively. As opposed to the frequency over a single panorama over time discussed previously, we direct our attention to counting interactions with the hotspot elements in the scene (D \({}_{9}\) ), the map element (D \({}_{11}\) ), and the thumbnail menu (D \({}_{17}\) ). Furthermore, interactions with the menus that contain HD paintings and 3D models are grouped as “Icon HD Painting” and “Icon 3D model.” Similarly, the interactions with these exhibits that trigger the appearance of the related panels are grouped as “Panel HD Painting” and “Panel 3D model,” respectively. Buttons for interacting with a 3D model, taking a closer look at an HD painting, reading an exhibit description, and viewing information about an artist are also grouped as single functions, regardless of the artwork they are associated with. In this context, Figure 7 presents the frequency of interaction with the 24 descriptors, whereas the exhaustive list with the corresponding JavaScript codes and descriptions is included in Appendix A.1.
    Fig. 7.
    Fig. 7. The frequency of interactions with 24 features (descriptors) that correspond to the interactive features represented in Figure 2 and explained in detail in Appendix A.1.
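    The grouping behind Figure 7 is a simple fold-and-count over the raw event rows: per-artwork variants collapse into one label before tallying. The fold rules and field names below are illustrative, not the paper's exact implementation.

```javascript
// Sketch: collapse logged events into per-descriptor counts, folding
// artwork-specific variants (e.g. every HD-painting icon) into one label.
function descriptorCounts(events, foldLabel) {
  const counts = {};
  for (const e of events) {
    const label = foldLabel(e);
    counts[label] = (counts[label] || 0) + 1;
  }
  return counts;
}

const counts = descriptorCounts(
  [
    { d: "D9" },                                   // scene hotspot
    { d: "D9" },                                   // scene hotspot again
    { d: "D15", artwork: "Passeggiata Amorosa" },  // one HD-painting icon
  ],
  (e) => (e.d === "D15" ? "Icon HD Painting" : e.d)
);
// counts -> { D9: 2, "Icon HD Painting": 1 }
```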
    The interaction with the hotspots placed within the scene (viewport) of the panoramas is clearly far more frequent than with the other features, followed by the map element and the thumbnail menu used for switching between panoramas. It is worth mentioning that the low frequency of the descriptor (D \({}_{21}\) ) that activates the description of a specific HD painting results from the description already being activated when the interactive HD painting is pressed. In the same context, we omitted the corresponding descriptor for the 3D models, as the model is the only feature displayed in its panel. The obtained chart suggests incorporating extra functionality into the hotspots beyond the mere transition between panoramas, as another strategy for highlighting the crucial content of the museum.
    Furthermore, it is easy for museum experts and curators to obtain statistics on visitor interactions based on selected variables from the tabular data. Selected results are presented in Figure 8, which illustrates statistics related to the top 10 most engaged users ranked by their number of interactions, and in Figure 9, which depicts statistics for the top 10 most engaged panoramas among the top 10 users with the highest number of interactions. Lastly, Figure 10 demonstrates the interactions of the 10 most engaged users with the 5 most engaged panoramas ranked by the highest number of interactions. These figures highlight the pragmatic potential of our method for museum institutions.
    Fig. 8.
    Fig. 8. Statistics related to the top 10 most engaged users ranked by their number of interactions.
    Fig. 9.
    Fig. 9. Statistics related to the top 10 most engaged panoramas for the top 10 users with the highest number of interactions.
    Fig. 10.
    Fig. 10. Interactions of the top 10 most engaged users with the top 5 most engaged panoramas.

    6 Conclusion

    In this article, we presented a novel analytic method for UB data analysis in virtual tours. Our algorithm achieved a satisfactory level of accuracy in tracking and mapping the different kinds of data obtained from user interactions. The variations in the shapes of the ROIs, their concentration, and the distribution of the grouped regions across all panoramas confirmed our approach to identifying user interest. The method’s efficiency is demonstrated through the numerical and visual representations, and it can serve as a useful tool for developers and curators in creating more engaging exhibit presentations.
    The contribution of this article is two-fold: (1) it presents an algorithm that extracts the importance of ROIs determining patterns of the visitor virtual movements and interest and (2) it provides statistics of captured browser activity both of which are crucial for assessing overall user interest. In conclusion, our method addresses an important question: Might non-expert users show interest in artworks that the museum did not initially deem as most important, thus not highlighting them? Our approach provides a detailed analysis of the viewing regions, along with interactions with other virtual museum features, offering experts and curators a clearer understanding of how exhibits are viewed by users. Our method goes beyond just identifying the most viewed panorama or artwork; it allows for exploration into the detailed interactions with specific parts of the artwork. For example, it can reveal what specific element of a painting is observed, for how long, and by how many users, answering deeper questions about visitor engagement patterns.
    A limitation of our study is the relatively low attendance of the virtual tour, which may have affected the precision of the calculated ROIs. Given that some descriptors received only a small number of interactions, a larger sample of visitors is needed to further validate the results, especially for a museum of the capacity of the Civic Art Gallery of Ascoli.
    Nevertheless, the final output of our algorithm provides a general overview of UB patterns in the virtual museum, as it combines data from the interactions obtained from all users across single panoramas in determining ROIs and descriptors that were frequently used. To gain more in-depth insights, the next step would be to conduct a per-user and per-panorama analysis as well as to introduce questionnaires to obtain more accurate information. In addition, the use of heat maps derived from user mouse movement within the identified ROIs would also provide valuable insights.
    Our future work would include expanding the research by extracting additional metrics such as visitor length of stay, sequence of visited panoramas and interactions with descriptors, as well as consideration of the visitor feedback data. This will enhance the understanding of UB in virtual environments. The next step in our research would be to gather a larger dataset to train a recommender system that will provide tailored and personalized virtual museum experiences for visitors.

    Acknowledgments

    The authors would like to thank Emanuele Frontoni, Eva S. Malinverni, and Paolo Clini for the scientific boosting and continuous stimulus to the present research, and they would also like to extend gratitude to Goran Stojic for his assistance in implementing the various JavaScript codes used in this study. The research is also framed by the E+ project DC Box Digital Curator Training & Tool box, 2021-1-IT02-KA220-HED-000032253.

    A Research Methods

    A.1. Interactive Elements

    In addition to the graphical overview in Figure 2, the functionalities and JavaScript codes associated with the tracked descriptors are detailed in Table 1.
    Table 1.
    D | JavaScript Code | Description
    D1 | me._button_it.onclick=function(e) | Button Italian language
    D2 | me._button_eng.onclick=function(e) | Button English language
    D3 | me._button_building.onclick=function(e) | Button Building
    D4 | me._button_hdpainting.onclick=function(e) | Button HD Paintings
    D5 | me._button_3dmodels.onclick=function(e) | Button 3D models
    D6 | me._button_reachus.onmousedown=function(e) | Button Reach us
    D7 | me._button_help.onclick=function(e) | Button Help
    D8 | me._thumbnail_show_button.onclick=function(e) | Button Thumbnail menu
    D9 | me._ht_node.onclick=function(e) | Scene hotspot—panorama PaF
    D10 | me._button_open_map.onclick=function(e) | Button map
    D11 | me._map_pin.onclick=function(e) | Map hotspot—panorama PaF
    D12 | me._title_room_floor.onclick=function(e) | Map title
    D13 | me._map_button_1.onclick=function(e) | Map 1st floor
    D14 | me._map_button_2.onclick=function(e) | Map 2nd floor
    D15 | me._imageicon_.onclick=function(e) | Icon HD painting
    D16 | me._imageicon_.onclick=function(e) | Icon 3D model
    D17 | me._thumbnail_active.onclick=function(e) | Thumbnail panorama PaF
    D18 | me._polygon_hotspot_container_3d.onclick=function(e) | Panel 3D model
    D19 | me._polygon_hotspot_container_hd.onclick=function(e) | Panel HD painting
    D20 | me._button_look_closer.onclick=function(e) | Button interact with 3D model
    D21 | me._button_artwork_file_hd.onclick=function(e) | Button HD painting description
    D22 | me._button_description_hd.onclick=function(e) | Button HD painting artist
    D23 | me._button_look_closer_hd.onclick=function(e) | Button look closer HD painting
    D24 | me._button_info_middle.onclick=function(e) | Button Home
    Table 1. The List of Selected Descriptors Related to the Interactive Elements in the Virtual Museum Application with the Corresponding JavaScript Codes

    A.2. Panoramas ROIs

    Figure 11 presents additional visualizations of the computed ROIs and their distribution over selected panoramas. The annotations belonging to the represented panoramas are explained in the figure caption.
    Fig. 11.
    Fig. 11. The ROI obtained from all users for the selected panoramas. The images are labeled following a 3 \(\times\) 7 naming system from left to right: Pa \({}_{4}\) , Pa \({}_{5}\) , Pa \({}_{9}\) , Pa \({}_{10}\) , Pa \({}_{11}\) , Pa \({}_{12}\) , Pa \({}_{14}\) , Pa \({}_{17}\) , Pa \({}_{18}\) , Pa \({}_{23}\) , Pa \({}_{24}\) , Pa \({}_{28}\) , Pa \({}_{31}\) , Pa \({}_{32}\) , Pa \({}_{33}\) , Pa \({}_{34}\) , Pa \({}_{39}\) , Pa \({}_{40}\) , Pa \({}_{52}\) , Pa \({}_{56}\) , and Pa \({}_{65}\) .

    A.3. Captured Panorama Region Calculation

    We know that the maximum FOV value covers the 180° vertical span of the panoramic image when the zoom is at its highest value ( \(100\%\) ) and that the height of the panorama is 8,192 pixels. This determines the height of the region for any user view and its location. The central point on the panorama image is defined in Section 4.1, Equation (4). Let \(h_{region}\) be the height of the panorama region, obtained as \(h_{region}=\mathrm{FOV}\times(8,192/100)\). The output of the function that captures a gazed region is represented in Figure 12, where the input includes \(x\) , \(y\) , and \(\mathrm{FOV}\) in percent.
    Fig. 12.
    Fig. 12. Panorama region captured from the inputs \(x = 11\text{,}505\text{px} \text{, } y = 3\text{,}783\text{px} \text{, and } \mathrm{FOV} = \sim {18.79} \%\) . (a) The panoramic view from the virtual application when the visitor is observing the statue more closely and (b) the captured region and its central point (in green) on the panorama using our algorithm.
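    The region-height formula can be worked through for the view in Figure 12. Since the vertical FOV is expressed as a percentage of the full 180° span, the region height in pixels is simply that percentage of the 8,192 px panorama height.

```javascript
// Worked example of h_region = FOV x (8192 / 100) from Appendix A.3.
const H_PANO = 8192; // panorama height in pixels

function regionHeight(fovPercent) {
  return fovPercent * (H_PANO / 100);
}

// For the view in Figure 12 (FOV ≈ 18.79 %):
const h = regionHeight(18.79);
// h ≈ 1539 px tall region centred on (x, y) = (11505, 3783)
```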

    References

    [1]
    Petroc Taylor. 2022. Digital Transformation - Statistics & Facts. Retrieved from https://www.statista.com/topics/6778/digital-transformation/#editorsPicks
    [2]
    UNESCO. 2020. Museums Around the World in the Face of COVID-19. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000373530
    [4]
    Tula Giannini and Jonathan P. Bowen. 2022. Museums and digital culture: From reality to digitality in the age of COVID-19. Heritage 5, 1 (Jan. 2022), 192–214. DOI:
    [5]
    Alireza Gholinejad Pirbazari and Sina Kamali Tabrizi. 2022. Recordim of Iran’s cultural heritage using an online virtual museum, considering the coronavirus pandemic. Journal on Computing and Cultural Heritage (JOCCH) 15, 2 (Apr. 2022), 1–14. DOI:
    [6]
    Christofer Meinecke, Chris Hall, and Stefan Jänicke. 2022. Towards enhancing virtual museums by contextualizing art through interactive visualizations. ACM Journal on Computing and Cultural Heritage 15, 4 (Dec. 2022), 1–26. DOI:
    [7]
    Europeana. 2008. European Digital Library Cenl, Edl & Edlnet. Retrieved from https://www.eesc.europa.eu/sites/default/files/resources/docs/jill-cousins-cese-brussels-171007.pdf
    [9]
    European Commission, Directorate-General for the Information Society, Media, Maurice Lévy, Elisabeth Niggemann, and Jacques De Decker. 2011. The New Renaissance: Report of the Comité Des Sages on Bringing Europe’s Cultural Heritage Online. Publications Office, DOI:
    [10]
    Digital Meets Culture. 2011. Implementation of Commission Recommendation on the Digitisation and Online Accessibility of Cultural Material and Digital Preservation. Retrieved from https://www.digitalmeetsculture.net/wp-content/uploads/2019/06/ReportonCulturalHeritageDigitisationOnlineAccessibilityandDigitalPreservation.pdf
    [12]
    EC. 2018. Leeuwarden Declaration—Adaptive Re-Use of the built Heritage: Preserving and Enhancing the Values of our Built Heritage for Future Generations. Retrieved from https://culture.ec.europa.eu/sites/default/files/2020-08/swd-2018-167-new-european-agenda-for-culture_en.pdf
    [13]
    Florin Nechita and Catalina-Ionela Rezeanu. 2019. Augmenting museum communication services to create young audiences. Sustainability 11, 20 (2019), 5830. Retrieved from https://www.mdpi.com/2071-1050/11/20/5830
    [14]
    Google Inc. 2011. Google Arts & Culture. Retrieved from https://artsandculture.google.com/
    [15]
    Renato Angeloni, Roberto Pierdicca, Adriano Mancini, Marina Paolanti, and Andrea Tonelli. 2021. Measuring and evaluating visitors’ behaviors inside museums: The Co. ME. project. SCIRES-IT-SCIentific RESearch and Information Technology 11, 1 (2021), 167–178. DOI:
    [17]
    Yicheng Jiang, Xia Zheng, and Chao Feng. 2023. Towards multi-area contactless museum visitor counting with commodity WiFi. Journal on Computing and Cultural Heritage. 16, 1 (Mar. 2023), 1–26. DOI:
    [18]
    Yuji Yoshimura, Stanislav Sobolevsky, Carlo Ratti, Fabien Girardin, Juan Pablo Carrascal, Josep Blat, and Roberta Sinatra. 2014. An analysis of visitors’ behavior in the Louvre Museum: A study using Bluetooth data. Environment and Planning B: Planning and Design 41, 6 (2014), 1113–1131. DOI:. Retrieved from https://journals.sagepub.com/doi/10.1068/b130047p
    [19]
    Jonathan D. L. Casano, Jenilyn L. Agapito, Abigail S. Moreno, and Ma. Mercedes T. Rodrigo. 2022. INF-based tracking and characterization of museum visitor paths and behaviors using Bluetooth low energy beacons. Journal on Computing and Cultural Heritage 15, 2 (Apr. 2022), 1–22. DOI:
    [20]
    Eyal Dim and Tsvi Kuflik. 2014. Automatic detection of social behavior of museum visitor pairs. ACM Transactions on Interactive Intelligent Systems (TiiS) 4, 4 (Nov. 2014), 1–30. DOI:. Retrieved from https://dl.acm.org/doi/10.1145/2662869
    [21]
    Iva Vasic. 2022. Virtual Museum of the Civic Art Gallery of Ascoli Piceno in Italy. Retrieved from https://dhekalos.it/tour/iva/pinacoteca/index.html
    [22]
    Lorenzo Stacchio, Claudia Scorolli, and Gustavo Marfia. 2022. Evaluating human aesthetic and emotional aspects of 3D generated content through eXtended Reality. In Proceedings of the 2nd Workshop on Artificial Intelligence and Creativity Co-located with 22nd International Conference of the Italian Association for Artificial Intelligence (AIxIA 2023), CEUR Workshop Proceedings 3528, CEUR-WS.org 2023. 38-49. Retrieved from https://ceur-ws.org/Vol-3519/paper4.pdf
    [23]
    Elena Morotti, Lorenzo Stacchio, Lorenzo Donatiello, Marco Roccetti, Jari Tarabelli, and Gustavo Marfia. 2022. Exploiting fashion x-commerce through the empowerment of voice in the fashion virtual reality arena: Integrating voice assistant and virtual reality technologies for fashion communication. Virtual Reality 26, 871–884. DOI:
    [24]
    Suzanne Sarraf. 1999. A survey of museums on the web: Who uses museum websites? Curator: The Museum Journal 42, 3 (Jul. 1999), 231–243. DOI:. Retrieved from https://onlinelibrary.wiley.com/doi/full/10.1111/j.2151-6952.1999.tb01143.x
    [25]
    Darren Peacock. 2022. Statistics, Structures & Satisfied Customers: Using Web Log Data to Improve Site Performance. Retrieved from https://eric.ed.gov/?id=ED482104
    [26]
    Loris Barbieri, Fabio Bruno, and Maurizio Muzzupappa. 2017. Virtual museum system evaluation through user studies. Journal of Cultural Heritage 26 (Jul. 2017), 101–108. DOI:. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S1296207416303016
    [27]
    Bruno Fanini and Luigi Cinque. 2020. Encoding, exchange and manipulation of captured immersive VR sessions for learning environments: The PRISMIN framework. Applied Sciences 10, 6 (Mar. 2020), 2026. DOI:. Retrieved from https://www.mdpi.com/2076-3417/10/6/2026/html
    [28]
    Ramona Quattrini, Roberto Pierdicca, Marina Paolanti, Paolo Clini, Romina Nespeca, and Emanuele Frontoni. 2020. Digital interaction with 3D archaeological artefacts: Evaluating user’s behaviours at different representation scales. Digital Applications in Archaeology and Cultural Heritage 18 (2020), e00148. DOI:. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S2212054819301018
    [29]
    Roberto Pierdicca, Michele Sasso, Flavio Tonetto, Francesca Bonelli, Andrea Felicetti, and Marina Paolanti. 2021. Immersive insights: Virtual tour analytics system for understanding visitor behavior. In Augmented Reality, Virtual Reality, and Computer Graphics. L. T. De Paolis, P. Arpaia, and P. Bourdot. (Eds.), Lecture Notes in Computer Science, Vol. 12980, Springer Science and Business Media Deutschland GmbH, 135–155. DOI:. Retrieved from https://link.springer.com/chapter/10.1007/978-3-030-87595-4_11
    [30]
    Jeremy Tzi-Dong Ng, Weichen Liu, Xiao Hu, and Tzyy-Ping Jung. 2020. Evaluation of low-end virtual reality content of cultural heritage: A preliminary study with eye movement. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020 (JCDL ’20). Association for Computing Machinery, New York, NY, 365–368. DOI:
    [31]
    David Nicholas and David Clark. 2014. Information seeking behaviour and usage on a multi-media platform: Case study Europeana. In Library and Information Sciences. C. Chen and R. Larsen (Eds.), Springer, Berlin, 57–78. DOI:
    [32]
    Mafkereseb Kassahun Bekele, Roberto Pierdicca, Emanuele Frontoni, Eva Savina Malinverni, and James Gain. 2018. A survey of augmented, virtual, and mixed reality for cultural heritage. Journal on Computing and Cultural Heritage (JOCCH) 11, 2 (Mar. 2018), 1–36. DOI:
    [33]
    Laia Pujol, Maria Roussou, Stavrina Poulo, Olivier Balet, Maria Vayanou, and Yannis E. Ioannidis. 2012. Personalizing interactive digital storytelling in archaeological museums: The CHESS project. In Proceedings of the 40th Annual Conference of Computer Applications and Quantitative Methods in Archaeology (CAA ’12). 77–90. Retrieved from https://www.madgik.di.uoa.gr/sites/default/files/2018-06/caa2012_paper_final.pdf
    [34]
    Xiaobai Chen, Abulhair Saparov, Bill Pang, and Thomas Funkhouser. 2012. Schelling points on 3D surface meshes. ACM Transactions on Graphics (TOG) 31, 4 (Jul. 2012), 1–12. DOI:
    [35]
    Iva Vasic, Aleksandra Pauls, Adriano Mancini, Ramona Quattrini, Roberto Pierdicca, Renato Angeloni, Eva S. Malinverni, Emanuele Frontoni, Paolo Clini, and Bata Vasic. 2022. Virtualization and vice versa: A new procedural model of the reverse virtualization for the user behavior tracking in the virtual museums. In Extended Reality. Lucio Tommaso De Paolis, Pasquale Arpaia, and Marco Sacco. (Eds.), Lecture Notes in Computer Science, Vol. 13446, Springer Nature Switzerland, Cham, 329–340. Retrieved from https://link.springer.com/chapter/10.1007/978-3-031-15553-6_23
    [36]
    FingerprintJS Inc. 2021. FingerprintJS. Retrieved from https://github.com/fingerprintjs/fingerprintjs
    [37]
    Giuseppe Resta, Fabiana Dicuonzo, Evrim Karacan, and Domenico Pastore. 2021. The impact of virtual tours on museum exhibitions after the onset of COVID-19 restrictions: Visitor engagement and long-term perspectives. SCIRES-IT-SCIentific RESearch and Information Technology 11, 1 (2021), 151–166. Retrieved from https://repository.bilkent.edu.tr/server/api/core/bitstreams/39497aab-89f7-4fba-99a8-5a7bae311193/content

    Published In

    Journal on Computing and Cultural Heritage   Volume 17, Issue 3
    September 2024
    230 pages
    ISSN:1556-4673
    EISSN:1556-4711
    DOI:10.1145/3613582
    This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives International 4.0 License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 24 June 2024
    Online AM: 06 April 2024
    Accepted: 15 March 2024
    Revised: 20 November 2023
    Received: 31 January 2023
    Published in JOCCH Volume 17, Issue 3

    Author Tags

    1. User behavior
    2. virtual reality
    3. regions of interest
    4. virtual museum
