1 Introduction

Emerging technologies often experience cycles of “summer” and “winter.” In summer, expectations grow, new technology emerges, and revolutionary change seems imminent. In winter, expectations are tempered, investment cools, and attention moves to other topics (Floridi, 2020). Extended reality (XR) recently experienced a very hot summer, with the global XR market growing 24.9% in 2022 to $25.2 billion (Alsop, 2022), before economic headwinds and technological difficulties led some major companies to scale back their XR ambitions (Lü, 2023; Miller, 2023; Thorbecke, 2023; Whelan & Flint, 2023). However, XR technologies promise new, different, and better experiences across many domains (Floridi, 2022), and development continues. Regardless of market conditions, XR technologies significantly threaten fundamental rights. The current lull in hype provides a time window to assess risks and consider early regulation. In this article, we intend to aid ongoing regulatory efforts by analyzing risks to safety and privacy posed by a subset of emerging XR technologies—immersive extended reality (IXR), which encompasses immersive VR and MR environments—and by formulating some policy recommendations addressed to European Union (EU) legislators on how to regulate these technologies effectively. This objective requires three clarifications.

First, we assume that regulation should primarily focus not on a specific technology—even if it is an important factor that needs to be considered—but on the kinds of experiences that technologies enable (Floridi, 2020). This approach mitigates the risk that regulation quickly becomes outdated. It informs our broad focus on aspects of extended reality, rather than specific XR technologies. XR is a spectrum that includes virtual reality (VR, where users are immersed in a virtual environment, often with a headset), augmented reality (AR, where virtual information is overlaid on the physical world), and mixed reality (MR, which encompasses both AR and the use of the physical world to augment the virtual) (Milgram & Kishino, 1994). Within this spectrum, IXR includes experiences such as social “metaverse” platforms, VR games, and work environments, but excludes non-immersive experiences such as “desktop” VR. It also comprises aspects of MR/AR where users are “immersed” in a context wholly mediated by a device, such as glasses that overlay information onto the user’s field of vision.Footnote 1 The term IXR goes beyond the EU’s definition of “virtual worlds” as “persistent, immersive environments” (European Commission, 2023): it also covers non-persistent and AR contexts, including standalone spaces such as virtual offices. However, it excludes applications such as the overlay of information on television sports broadcasts or smartphone AR, as these mediate only part of a user’s world and thus are not immersive. Inevitably, many of our policy recommendations will also apply to other XR applications, including non-immersive social metaverse platforms. However, non-immersive technologies have been around for longer and thus are better regulated than emerging immersive technologies; including threats and legal analysis specific to them would render the scope of the paper overbroad. We focus on immersive technologies because they pose novel threats to fundamental rights, channeled through two main avenues: by amplifying the psychological and physiological impacts of virtual experiences and by enabling the increased collection of personal data, particularly sensitive and biometric data.

Second, many issues in IXR governance demand attention, such as competition, liability, financial transactions, cybersecurity, health, accessibility, and inclusiveness (Madiega et al., 2022). Here, we focus exclusively on safety and privacy because they are among the most critical aspects implicating the protection of fundamental rights and the quality of experiences in IXR, and they must be addressed early. Safety is essential to having a good experience in IXR; privacy issues are relevant to both IXR users and non-users. Furthermore, biometric data collection may be fundamental to IXR platforms’ business models and thus must be addressed now. The importance of safety and privacy is reflected in several EU policy documents that foreground these rights, as well as in the Charter of Fundamental Rights of the European Union (“the Charter”) and the European Convention on Human Rights (ECHR). The EU has also acknowledged their importance in XR specifically and their role in facilitating other rights. “Online privacy and safety” is a crucial pillar of the EU “Digital Decade” initiative (European Commission, 2021), and both are included in the European Declaration on Digital Rights and Principles for the Digital Decade.Footnote 2 These high-level goals manifest in a European Parliamentary Research Service (EPRS) report on the “metaverse” (Madiega et al., 2022), which highlights the importance of physical and mental health issues and data privacy, while the July 2023 “EU initiative on Web 4.0 and virtual worlds” recognizes challenges to “personal data and privacy,” cybercrime, and cyber violence (European Commission, 2023).

Third, although we provide recommendations to EU policymakers, our map of safety and privacy risks and some of our recommendations may also apply to other jurisdictions. Because IXR is a global and pan-jurisdictional phenomenon, we hope this article will contribute to a more extensive discussion of how safety and privacy protections can be harmonized in other contexts. Still, we address this article to EU legislators because they are moving towards proactive regulation of XR, which could also have significant implications for other jurisdictions’ governance. In only one year, the EU moved from releasing an EPRS briefing on the “metaverse” and proposing a “metaverse amendment” to the forthcoming Artificial Intelligence Act (AI Act) (Bertuzzi, 2022) to hosting Citizens’ Panels and launching a regulatory initiative aimed at developing a non-legislative framework to uphold EU values in “virtual worlds” (Joint Research Centre, 2023). The regulatory initiative’s strategy on “Web 4.0 and virtual worlds” calls on the EU to be an early mover in development and regulation (European Commission, 2023). To analyze whether current legislation is fit for purpose, the Committee on the Internal Market and Consumer Protection (IMCO) published the European Parliament’s first draft motion on “virtual worlds,” highlighting risks and urging “fitness checks” to see how existing legislation is coping with new developments (Grady, 2023; IMCO, 2023), while the Committee on Legal Affairs (JURI) published a report on the policy implications of virtual worlds (JURI, 2023) and the European Commission Joint Research Centre published a report on the challenges of “next generation virtual worlds” (Hupont et al., 2023).

While existing initiatives focus on non-legislative solutions, it is reasonable to anticipate that the EU will pass legislation on IXR in the near future.Footnote 3 At least some aspects of EU regulation on IXR will be “exported” to other markets by companies and governments that follow EU regulation because of its regulatory competence and market size—the so-called “Brussels Effect” (Bradford, 2020). The private sector is likely to play a crucial role in translating regulation into practice, and we hope that IXR companies will anticipate our recommendations in their own self-regulation and codes of practice, helping them fulfill their human rights obligations (United Nations Human Rights Office of the High Commissioner, 2011) and avoid potentially disruptive adaptations when legislation is passed.

Let us turn now to the structure of the article. Section 2 explains our methodological approach. Section 3 outlines the theoretical conception of safety and privacy grounding the rest of the article. Sections 4 and 5 use historical VR literature and the most recent wave of XR research to discuss safety and privacy risks in IXR. Section 6 discusses how some extant EU legislation succeeds or fails in mitigating those threats. Section 7 outlines our recommendations to legislators; Section 8 concludes the article.

2 Methodology

This article aims to map the landscape of XR risks to inform policymakers, rather than to comprehensively classify or rank the likelihood of all possible risks. To achieve this, we employ an iterative narrative literature review (Jahan et al., 2016). The initial search phase used Google Scholar to identify relevant articles based on combinations of XR-related terms and privacy and safety keywords. Initial search terms included:

  • Extended reality privacy

  • Extended reality safety

  • Virtual reality privacy

  • Virtual reality safety

  • Augmented reality privacy

  • Augmented reality safety

  • Mixed reality privacy

  • Mixed reality safety

We read the titles and abstracts of returned articles and filtered them for relevance based on whether they substantively discussed privacy or safety in relation to IXR. We included risks that have already manifested in the physical world or on the existing Internet and are technologically plausible given the current trajectory of IXR, as well as novel risks that are technologically plausible. Articles speculating about more remote or theoretical risks of IXR (such as those relying on yet-to-be-developed technology or not grounded in an accurate understanding of IXR technology and development) were excluded, as discussing these would distract from the policy-oriented goals of this paper. Where appropriate, additional articles and documents were added ad hoc to provide further context to the examples identified in the literature search.

3 Conceptualizing Safety and Privacy

Safety has been a concern since the early days of VR development, when studies focused mainly on the physical effects of VR. This made sense when headsets weighed four kilograms and often caused severe discomfort (Costello, 1997; Wilson, 1996). Now, additional risks are emerging regarding mental safety and social stability. We use a three-part definition of “safety” encompassing physical, mental, and social elements, which is informed by the EU’s conceptualization of the term.Footnote 4 The rights to physical and mental safety derive from Article 3 of the Charter, and the EU has begun to address these rights in the digital context with measures aimed at ensuring the safety of hardware and software. Historically, product safety legislation, like the 1985 Product Liability Directive,Footnote 5 has focused on preventing physical harm and material damage. Recently, the Council and the European Parliament have begun acknowledging the mental aspects of safety in product liability legislation; proposed updates to the Product Liability Directive would allow individuals to claim damages for psychological harm (De Luca, 2023). The Digital Services Act (DSA)Footnote 6 includes provisions protecting mental and physical health. It addresses harassment, hate speech, discrimination (Recital 40), and “serious negative consequences to a person’s physical and mental well-being” (Recital 83). It also begins tackling the threat of digital technology to social stability. Although social stability is not construed as a fundamental individual right, EU legislation works to promote it, because living in a safe and stable society is arguably necessary for physical and mental safety. One of the DSA’s fundamental premises is that diverging national laws on “illegal content, online disinformation, or other societal risks” negatively affect the internal market (Recital 2); it goes on to outline the systemic risks that platforms must address to ensure that fundamental rights are protected, implying that social stability is an important facilitator of individual rights.

In digital contexts, privacy (enshrined as a fundamental right in Articles 7 and 8 of the Charter, and Article 8 of the ECHR) has primarily been viewed in the context of communications and personal data protection (Renieris, 2023). However, privacy also encompasses aspects of one’s physical being, home, and lifestyle. As an immersive and often embodied experience, IXR brings elements of physical privacy into the virtual domain. It facilitates the flow of information within our broader information environment; in more philosophical terms, IXR tends to lower the ontological friction in the infosphere (Floridi, 2005). Thus, it cannot be regulated solely as a matter of data and communications privacy.

In the early days of VR and AR, privacy was often an afterthought or disregarded. Jaron Lanier (who coined the term “virtual reality”) cautioned: “If there’s a total acceptance of the right to privacy, there’s also a danger of too much isolation developing in the long term” (Lanier & Biocca, 1992). At the same time, some argued that VR would facilitate “strong privacy” through encryption (Friedman, 1996). These reflect two aspects of privacy: that of the body or self and that of communications. Recent IXR research—including studies examining privacy in “proto-metaverses” like Second Life (Leenes, 2008)—focuses more on data privacy and physical privacy, likely because of increased commercialization since 2010 (Kulal et al., 2022); see (Abraham et al., 2022; Bagheri, 2017; Bavana, 2021; Falchuk et al., 2018; Huang et al., 2022; Martin, 2022; Sethi, 2022; Spiegel, 2018). We draw on all of these aspects of privacy to provide a comprehensive overview of the risks below.

Unlike safety, privacy lacks a unique EU legislative framework conceptualizing its different aspects in digital contexts. The General Data Protection Regulation (GDPR)Footnote 7 offers a framework for data protection. However, privacy concerns in IXR extend beyond data protection. Thus, we adopt Beate Roessler’s definition and taxonomy of privacy: “Something counts as private if one can oneself control the access to this ‘something’. Conversely, the protection of privacy means protection against unwanted access by other people” (Roessler, 2005, 8). This conception of privacy applies across three dimensions, or “possibilities for exercising control over ‘access’”: informational privacy, decisional privacy, and local privacy (Roessler, 2005, 9). It covers data protection, communications, and embodied aspects of privacy, and also corresponds to interpretations of Article 8 of the ECHR, which involves the home (local privacy); correspondence, image and reputation protection, surveillance issues, health information, and data protection (informational privacy); and family life, physical/psychological/mental integrity, and identity and autonomy issues (decisional privacy) (European Court of Human Rights, 2022). Roessler’s three-pronged definition allows us to simplify our taxonomy while hewing close to the EU context.

4 Threats to Safety

In this section, we outline the main threats to safety posed by IXR, as identified by our narrative literature review. We consider both amplifications of existing harms and harms novel and unique to IXR.

4.1 Threats to Physical Body

The physical bodies of IXR users immersed in a virtual environment remain involved in the experience and thus are potentially at risk. We classify the physical harms of IXR into two categories: incidental and intentional.

Incidental harms arise during the normal use of IXR technology, without any malfunction or interference. For instance, “cybersickness” is a well-documented side effect of using VR headsets; symptoms include nausea, headaches, fatigue, and vomiting (Stephen et al., 2020). Since the 1990s, it has been known to affect women disproportionately (Hayles, 1996; Jasper et al., 2020). Potential reasons include increased susceptibility to motion sickness, greater postural instability, and the interpupillary distance (IPD) of VR headsets, which is often calibrated to the typical male IPD range (Kelly et al., 2023). This is the first example of how IXR disparately affects specific groups, which could be exacerbated if activities in IXR become widely adopted and/or mandatory (for example, in work environments), although techniques are being developed to address it (Ang & Quarles, 2023). Regarding acute physical injury, IXR headsets often obscure users’ views of their surroundings, which could cause collisions with nearby objects, pets, or bystanders (Needleman & Rodriguez, 2022). IXR devices also often contain electronics close to users’ heads, which could cause serious bodily injury or brain damage if they malfunction (Bagheri, 2017), although product safety standards seem to have prevented this so far.

Intentional physical harm may follow if devices are hacked to cause malfunction (Yuntao et al., 2022) or if malicious individuals or applications alter users’ perception and lead them into dangerous situations (Abraham et al., 2022). Users could suffer physical harm if they are targeted by other users—for example, by “strobing” or “startling” epileptic or otherwise vulnerable usersFootnote 8 (Lemley & Volokh, 2018). Hacking is already illegal under laws implementing Directive 2013/40/EU,Footnote 9 but these malicious user actions represent an additional avenue for technology-facilitated physical assault.

4.2 Threats to Mental Health

Because experiences in IXR trigger the same nervous system and psychological responses as experiences in the physical world (Parsons et al., 2009), psychological harm to users in virtual environments can cause genuine distress and suffering. We consider harms perpetrated by other IXR users before moving on to those perpetrated by IXR platforms and technologies. Some may have associated physical effects, but we categorize them based on their primary impacts.

Online harassment could be exacerbated in IXR because of its immersive nature and the unavoidable presence of identity signals. Physical harassment (bodily interference with an avatar) and verbal harassment are already proving especially problematic in IXR (Outlaw, 2018), although Blackwell et al. (2019) also raise the possibility of environmental harassment using the affordances of VR worlds. The Center for Countering Digital Hate identified one violating incident in VRChat every seven minutes (Frenkel & Browning, 2021). Sexual harassment is especially prevalent, but platforms struggle to address it proactively. Meta and QuiVr only introduced “personal boundary” features after women publicized how other users had groped them (Basu, 2021). However, Meta’s boundary, which was intended to “establish standard norms for how people interact in VR” (Robertson, 2022), can now be turned off (Perez, 2022). As in the physical world, observable identity signals—e.g., of age, gender, sexuality, race, and disability statusFootnote 10—are used to target verbal and physical harassment (Blackwell et al., 2019). A 2018 survey of VR users found that 49% of female respondents had experienced sexual harassment, while 30% of male respondents had experienced racist or homophobic harassment (Outlaw, 2018). Stereotyping may be increased because online presentations are generally less nuanced than offline presentations (Axelsson, 2002, 198). Thus, individuals’ experiences may be significantly worse depending on how they present themselves. In addition to historically marginalized groups, children are another group of concern. If IXR becomes widespread among youth, it could endanger children’s mental health by intensifying the impacts of cyberbullying until they resemble those of physical bullying. Some 37% of American children ages 12–17 have been cyberbullied (Patchin, 2019), yet children rarely talk to adults about bullying online (Reed & Joseff, 2022).

The next three concerns are more speculative but plausible. IXR offers a new avenue for cyberstalking (Canbay et al., 2022; Falchuk et al., 2018; Sethi, 2022). Stalkers embodied in avatars could make their targets feel even more threatened due to the feeling of physical presence. Cyberstalking causes real psychological damage (Chemaly, 2014) and can spill into the physical world (potentially using AR functionalities to track people), endangering physical and mental safety.

IXR could open new avenues for financial and identity fraud, causing emotional and dignitary damages (Merritt, 1989) via “social engineering hacking” (Falchuk et al., 2018) and other kinds of phishing.Footnote 11 Avatar identity theft could enable impersonation and fraud (Yuntao et al., 2022), but also inflict emotional trauma if users identify with their avatars (Michael & Metzinger, 2016).

Concerningly, deepfake avatars or characters in immersive experiences may be used as “non-consensual virtual sexbots” (Kalpokas & Kalpokienė, 2023, 100) or in highly realistic “revenge pornography.” Revenge pornography is currently created using non-immersive deepfake technology, but immersive revenge pornography is likely to follow (ibid., 105). Exploiting “body-to-avatar rendering data”—or simply clever design—could create deepfake avatars for an even more violating kind of revenge pornography (ibid., 100). Some 98% of deepfake videos online are nonconsensual pornography, and 99% of that pornography depicts women (Home Security Heroes, 2024). This could create situations where deepfake avatars perform vulgar or defamatory actions in widely accessible VR spaces (or projected in AR). Deepfake pornography could cause victims emotional harm and reputational damage, creating both acute and long-term harms.

Moving on to harms facilitated by IXR platforms and technologies themselves, IXR could exacerbate psychological disorders. Traditional social media has been linked to eating disorders and self-harm (Jacob et al., 2017; Turner & Lefevre, 2017), with an especially grave impact on children and teenagers (Wells et al., 2021). Since images have a greater potential to trigger self-harm than text (Jacob et al., 2017), immersive pro-self-harm or pro-eating-disorder content (or immersive environments promoting harmful behavior) could feasibly be even more dangerous, though this requires further research.

Another under-investigated issue is how IXR could encourage alcohol misuse. There is a body of literature on how VR can be used to assess and treat alcohol use disorders (Ghiţă & Gutiérrez-Maldonado, 2018), but thus far none on how it could exacerbate alcohol misuse. However, anecdotal reports suggest that nightlife on some social IXR platforms, especially VRChat—effectively available around the clock because of IXR’s global nature—encourages people to drink while physically alone, even to the point of alcohol poisoning (The Virtual Reality Show, 2022; Visual Venture, 2023). Some people report that VR causes them to drink more and that it is difficult to gauge how drunk they are when sitting down wearing a headset (The Virtual Reality Show, 2022). In IXR, excessive consumption of alcohol may be perceived as safer because specific physical hazards, like driving, are removed, but it introduces new risks. While IXR may not directly cause these problems, how it can promote alcohol abuse should be investigated.

While more speculative, there is evidence that IXR could trigger psychological disorders through its potential for unhealthy engagement and addiction. Such addiction is seen in 2D gaming (WHO Team, 2020). Studies of compulsive VR use are limited, but they suggest that addiction rates are currently similar to those for traditional gaming and social media, and that the affordances of VR positively predict addiction, meaning that as embodiment and immersion increase, so too might addiction (Barreda-Ángeles & Hartmann, 2022). It is also hypothesized that using biometric data to target and refine experiences could increase engagement and addiction (O’Brolcháin et al., 2016). Clinically, “gaming disorder”Footnote 12 and depersonalization and derealization dissociative disorders are associated with 2D gaming (De Pasquale et al., 2018; WHO Team, 2020) and could be more prevalent in IXR. Subclinical “video game addiction” can also be harmful (Digital, Culture, Media and Sport Committee, 2019), and readjustment difficulties have been reported when exiting virtual worlds (Michael & Metzinger, 2016; Spiegel, 2018). Additionally, online games can encourage excessive spending, particularly by children and cognitively disabled users (Kleinman, 2019). In immersive contexts, AR advertising has been shown to increase customers’ willingness to pay (Pozharliev et al., 2022). An immersive context may also add a sense of “unreality” (virtualization or gamification) to financial consequences. Together, these factors can create harmful consumption environments that consumers are not accustomed to.

Compulsive IXR use can also have physical effects in addition to mental and financial ones. Bodily neglect is associated with gaming addiction, and parents suffering from video game addiction have neglected their own physical health and their children’s (Spiegel, 2018). Furthermore, people in fits of “gamer rage” have injured or killed children (Michael & Metzinger, 2016). While these issues are not unique to IXR, they could be exacerbated if IXR proves to be more addictive than traditional online interactions.

Children’s vulnerability extends beyond cyberbullying and bystander impacts, as IXR may interfere with children’s development and well-being. Exposure to sexually explicit media in early adolescence is related to risky sexual behavior in early adulthood (Lin et al., 2020). Already, IXR platforms host adult content. Age-gating measures are often ineffective—a reporter discovered children in a virtual strip club displaying explicit content (Campoamor, 2022)—and IXR’s immersive and interactive nature could endanger children. Minors have reported being groomed and forced to perform virtual sex acts (Crawford & Smith, 2022). There could also be subtler impacts on development. Children in a VR experience displayed worse impulse control than children using two-dimensional video (Bailey et al., 2019), and VR can implant false memories in children (Segovia & Bailenson, 2009), although the long-term effects of this are unknown. Children tend to perceive conversational agents—even disembodied ones like Amazon’s Alexa and Apple’s Siri—as “alive” (Lovato et al., 2019), but treat them as “servants” and use tones not appropriate for interpersonal communication (Bylieva et al., 2021).Footnote 13 Future research should investigate whether IXR could lead to harmful parasocial relationships and how these could affect children’s ability to function in physical society or disrupt kinematic development (Miehlbradt et al., 2021).

The final issue in this section is platforms’ direct manipulation of users’ psychological states. Tactics similar to those used in the Facebook “emotional contagion” experiment, where the platform manipulated users’ emotional states by tweaking the amount of positive and negative content in their news feeds (Del Vicario et al., 2016), could influence users’ moods and behaviors. That impact could be amplified using biometric tracking, emotion capture, and brain-computer interfaces (O’Brolcháin et al., 2016). While experiments are permissible under specific ethical principles (Polonioli et al., 2023), informed consent cannot be obtained via a clause buried deep in the terms of service (Koops, 2014); attempting to do so would significantly violate user autonomy.

4.3 Threats to Social Stability

Threats to social stability can be split into several categories: threats to social order, security, and democracy. Though further study is required, the large-scale impacts of some of the issues mentioned above could affect social order. One deserving special attention is the normalization of harassment. Harassment in social IXR experiences risks creating a society, on- and offline, where specific people do not feel welcome, and it could become more widespread and harmful than on the traditional Web, as embodied identity markers are easier to observe in IXR. While changing an avatar’s identity signals can mitigate harassment, it comes at a cost to the victim’s freedoms of personality and expression. If harassment becomes normalized, as toxic behavior has been in the gaming community (Beres et al., 2021), IXR could create a virtual community that embeds and encourages bias and discrimination against already-marginalized communities, which could then increase such bias and discrimination across the Internet (Schmitz et al., 2022) and in the physical world (Chan et al., 2016). All this would further exacerbate the digital divide within communities.

IXR presents novel, albeit unrealized, security risks via the unique opportunity for extremist recruiting (Doctor et al., 2022; O’Brolcháin et al., 2016), training (Yuntao et al., 2022), and indoctrination (Michael & Metzinger, 2016). Groups such as ISIS already use traditional social media for recruiting (Awan, 2017), and just as the US Navy has found VR effective for recruitment and training (Chang, 2018), so might terrorist groups. Furthermore, immersive environments could act as a “virtual office” facilitating coordination between individuals who may be prevented from traveling by sanctions or conflict. Terrorists could use IXR to build AR or VR training scenarios, perhaps using a “digital twin”—a highly realistic digital replica—of a potential target (Doctor et al., 2022; World Economic Forum, 2022), putting people at risk of injury and even death in an attack.

Like social media, IXR, if widely adopted, could destabilize democratic institutions by altering our perception of reality and our interactions with each other. Social media have been linked to political polarization (through both exposure to partisan content and uncivil political exchanges) and the spread of mis- and disinformation, negatively impacting the stability and norms of political institutions (Tucker et al., 2018). IXR could exacerbate both problems through “Reality Distortion Filters” (Zallio & John Clarkson, 2022). Instead of selecting what content you scroll past on a social media screen or what ads you see on a sidebar, algorithms could select what billboards you see, what objects appear around you, and even what AI-powered avatars (“Artificial Avatars”) you encounter, whether in a virtual context or augmenting the physical world. These interactions could be continuously adjusted based on users’ micro-reactions, offering a potent tool of persuasion (Rosenberg, 2022b). In traditional social media, users can easily access content beyond what is targeted to them. However, targeting in IXR could result in two avatars in the same virtual or physical location seeing completely different things. When, say, one user sees advertisements for one soft drink and another sees advertisements for a different soft drink, this may be relatively innocuous; but when one is surrounded by content promoting a conspiracy theory and the other is not, there is a concerning incongruity, difficult to monitor, that endangers both users. Resolving political differences becomes even more difficult when users do not know that their baseline realities may differ. The entire immersive reality can become individually tailored; polarization thus transcends users’ social media feeds to pervade their perceived realities.

4.4 Conclusion

This section has discussed the safety threats of IXR, including incidental and intentional threats to the physical body; threats to mental health caused by other users, IXR technologies, and IXR platforms; and threats to social stability through impacts on social order, security, and democracy. Particularly concerning are the increased threats to vulnerable groups, including children, who are more vulnerable to harms in immersive environments and from using immersive technologies, and individuals of marginalized identities, who are less likely to be able to access IXR and more likely to be harassed within it. Possible mitigations will be discussed in Section 7.

5 Threats to Privacy

Roessler’s taxonomy describes privacy violations as “illicit surveillance,” “illicit interference in one’s actions,” or “illicit intrusions in rooms or dwellings” (Roessler, 2005, 9). In IXR, we will be considering virtual actions and dwellings in addition to physical ones, but this does not make violations any less concerning. In some ways, the potential infringements are more severe because surveillance can be built into the fabric of IXR itself.

5.1 Informational Privacy

Informational privacy is the “right to protection against unwanted access in the sense of unwanted interference in personal data about themselves” (Roessler, 2005, 9). Here, “personal data” refers to information about oneself, but informational privacy also involves control over one’s self-presentation and the “right to be left alone,” so that not every action, even in public settings, is scrutinized, in order to facilitate an “authentic life” (Roessler, 2018, 138–139). Informational privacy issues are not unique to IXR; however, the biometric data that IXR devices can collect magnify privacy and data protection issues.

IXR devices and platforms can collect an enormous amount of biometric data relating to an individual’s physical, physiological, or behavioral characteristics. While we primarily discuss how this impacts individual privacy, data collectors can aggregate and anonymize such data using so-called “privacy-enhancing technologies” before using them to derive insights about human behavior for the same ends as individual data collection (including targeting and personalization), creating concerns for group privacy (Renieris, 2023, 105; Floridi, 2017). IXR devices can track physical movements like facial expressions, eye movements, gestures, gait, and posture; breathing patterns; voice and faceprints; haptic responses; and environmental data like location, background, and surrounding noise or visuals (Pahi & Schroeder, 2023). Because many IXR platforms are partly, if not primarily, funded by advertising, they can exploit biometric data to target advertisements through a process dubbed “biometric psychography” (Heller, 2020). Tracking can be embedded in platform operation, which has been called “surveillant physics” (McStay, 2023) and facilitates a “[totalization] of surveillance” (Kalpokas & Kalpokienė, 2023, 21–22). Biometric data can be aggregated to create an individual “kinematic fingerprint” (Spiegel, 2018). While many of these data are generally considered non-identifiable, as Schroeder (2010, 235) predicted, they are so complete that users can be uniquely identified with high accuracy based on just seconds of IXR movement data (Nair et al., 2023), meaning that conceptions of personal and non-personal data require revision. The GDPR definition of biometric data only covers data that can uniquely identify an individual and offers face and fingerprints as examples (Article 4). However, as technology advances, so do identification techniques. Since pieces of otherwise non-identifiable data can be aggregated to uniquely identify individuals, most data “relating to the physical, physiological or [behavioral] characteristics of a natural person” (Article 4 GDPR) could be considered biometric data under the GDPR.
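
To make the identification risk concrete, the following minimal sketch shows how even coarse summary statistics of head-tracking telemetry could serve as a “kinematic fingerprint” that links a new session to a previously observed user. The feature set, function names, and nearest-neighbor matching are invented for illustration; this does not reproduce the method of Nair et al. (2023) or any particular platform.

```python
# Minimal, hypothetical sketch of identification from movement telemetry.
import numpy as np

def kinematic_features(trace: np.ndarray) -> np.ndarray:
    """trace: (T, 3) array of head positions sampled over a few seconds."""
    velocity = np.diff(trace, axis=0)
    return np.concatenate([
        trace.mean(axis=0),             # e.g., resting head height
        trace.std(axis=0),              # sway / posture variability
        np.abs(velocity).mean(axis=0),  # characteristic movement speed
    ])

def identify(trace: np.ndarray, enrolled: dict[str, np.ndarray]) -> str:
    """Return the enrolled user whose stored feature vector is closest to the query."""
    query = kinematic_features(trace)
    return min(enrolled, key=lambda uid: np.linalg.norm(enrolled[uid] - query))

# Usage: enrolled = {"user_a": kinematic_features(trace_a), ...}
# identify(new_trace, enrolled) links an ostensibly anonymous session to a known person.
```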

Besides revealing an individual’s identity, biometric data and other XR data can be used to infer sensitive or protected characteristics (Abraham et al., 2022; Bagheri, 2017), including health conditions such as Alzheimer’s disease (Fristed et al., 2022), and to monitor affective state and cognitive processes (Abraham et al., 2022), compounding surveillance concerns. Some suggest that eye tracking and voice analysis can reveal information about identity, personality, emotions, drug consumption, socioeconomic status, and health (Kröger et al., 2020). While the scientific robustness of many of these technologies is questionable (Roberts, 2022), the resulting inferences could have ramifications in the physical world. If, for example, a person working in a homophobic environment were inferred to be gay from biometric data or behavioral observation in IXR (Logan, 2018; Rupp & Wallen, 2007), disclosing that information—regardless of its accuracy—could have professional ramifications. Moreover, misleading inferences based on incorrect data could cause adverse discriminatory or health outcomes, underscoring the importance of facilitating user access to their personal data.

Awareness of constant surveillance may limit how comfortable people feel expressing themselves in IXR and their ability to explore different identities. Users may conceal some private information, but one’s biometric data and involuntary reactions cannot be concealed from platforms (Heller, 2020). Surveillance can have “chilling effects” whereby individuals self-censor behavior, which impacts freedom, creativity, and self-development (Solove, 2006). For example, after revelations about the US National Security Agency’s (NSA) mass surveillance emerged, Internet traffic to sensitive Wikipedia articles decreased (Penney, 2016).Footnote 14 This empirical evidence also suggests that individuals need to be aware of surveillance for it to have a chilling effect. Studies indicate that IXR users are often unaware of how much data is collected about their activities and interactions in IXR (Abraham et al., 2022), partly because of terms and conditions designed to keep them uninformed (O’Brolcháin et al., 2016). However, as the post-NSA chilling of Wikipedia traffic suggests, once users become aware of the existence and extent of data collection, they may modify their behavior, possibly becoming less willing to use avatars that accurately represent their identity, to engage in specific activities, or to use IXR devices in specific places. In addition to covert surveillance, overt interrogation can also cause behavioral chilling. Interrogation could involve excessive requests for user information by the platform—either at signup or during use—or users badgering others with personal questions. If users feel pressured to provide information, even if they refuse, it is still a discomfiting invasion of privacy (Solove, 2006), and children may be more prone to oversharing personal information (Reed & Joseff, 2022). Overall, chilling effects will impact everyone uncomfortable with surveillance, but especially those who need to keep some aspect of their identity private, including activists and people exploring their identity.

Surveillance can also be performed by other IXR users, workplaces and schools, and hackers. As with stalking, individuals in IXR could follow others around virtual worlds and observe their activities. Alternatively, they could exploit technological means, such as the “bugs” used in the Second Life platform to monitor others’ conversations (Leenes, 2008) or recording functionalities (Blackwell et al., 2019), some of which might be built into the platform. If people work in IXR environments (such as Meta’s Horizon Workrooms) or use work-provided devices (such as an AR device providing guidance in a warehouse), employers could monitor employees’ physiological data and use it in performance evaluations—for example, using eye tracking as a proxy for attention—and in hiring or firing decisions (Madiega et al., 2022). The same could be done for online schooling, extending surveillance into private virtual and physical spaces. The resulting biometric datasets represent a treasure trove for hackers, who could access stored biometric data or IXR equipment, including recording devices used for motion capture (O’Brolcháin et al., 2016). This creates new opportunities for identity theft, blackmail, and other fraud. Furthermore, the sensitivity of biometric data means that a breach would be uniquely damaging to users’ physical and mental safety, as they would know both that they are more vulnerable to identity theft and other ramifications, and that a platform entrusted with their data—or, worse, one that collected it surreptitiously—had allowed it to leak (Solove, 2006).

5.2 Decisional Privacy

Decisional privacy concerns the freedom from unwanted interference in decisions and actions (Roessler, 2005, 9). It covers privacy of the body, personal relations, and decisions regarding them (Roessler, 2018, 139), all of which relate to IXR.

An issue related to, but distinct from, individual biometric data privacy is the privacy of avatar movements: where an avatar goes and when, whom they choose to interact with, and with whom they share an experience. Using a Web browser together, for example to watch a movie or do some shopping, does not present the same risk. People often choose to be anonymous online, using private browsers and/or anonymous profiles to explore aspects of themselves that they wish to keep private (Lauri & Farrugia, 2020). However, there is currently no “incognito mode” in IXR: even if no identity verification is required, biometric data can link “burner” avatars to the owner’s primary account and even to their physical person. Therefore, a platform and other interested parties can always determine where an avatar or person goes and how they behave, threatening user autonomy and self-expression.

Another threat to autonomy is the possibility of platforms using individual micro-reactions to “nudge” users into actions or decisions they would not otherwise have taken (Rosenberg, 2022b). Regardless of the significance of the decision made, this kind of artificial influence via feedback loop is an enormous violation of individual decisional privacy and autonomy, especially when it exploits knowledge of personal preferences—potentially inferred from IXR data—that makes people more “nudgeable” (de Ridder et al., 2022). Violations could also result from automated decision-making using IXR data. For instance, employers could monitor attentiveness using eye-tracking data and feed it into an employee’s annual review. Regardless of whether such data lead to accurate inferences about individuals (Roberts, 2022), the algorithms used to make these inferences may be biased against specific groups. For example, facial recognition has historically performed poorly for women and people with darker skin tones because training datasets are often skewed towards men and people with lighter skin tones (Buolamwini & Gebru, 2018), and thus members of these groups may have worse outcomes in these assessments (Pahi & Schroeder, 2023). Decisional privacy enshrines the idea that individuals should be free to make decisions about their lives and bodies as they see fit, but IXR could subject users to automated decision-making that infringes on that freedom.
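
As a purely hypothetical illustration of such automated assessment, the sketch below turns eye-tracking samples into an “attentiveness score” and thresholds it into a review outcome. The data structure, score, and threshold are invented for this example and do not describe any actual employer system or platform API.

```python
# Hypothetical sketch of a gaze-based automated assessment (illustrative only).
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp_s: float
    on_task_region: bool   # whether gaze fell within the assigned work area

def attentiveness_score(samples: list[GazeSample]) -> float:
    """Fraction of samples in which gaze was on the assigned region."""
    if not samples:
        return 0.0
    return sum(s.on_task_region for s in samples) / len(samples)

def automated_review(samples: list[GazeSample], threshold: float = 0.7) -> str:
    # A decision made solely on this score is the kind of processing that
    # GDPR Article 22 constrains; tracker error or disability can skew it.
    return "meets expectations" if attentiveness_score(samples) >= threshold else "flagged"
```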

5.3 Local Privacy

Local privacy is the right to have a space where one can “do just what [they] want, unobserved and uncontrolled.” It involves solitude and the protection of family communities and relationships (Roessler, 2018, 140). While this has historically applied only to the physical world, people will need the same protection in immersive worlds because the same concerns about observation and lack of privacy apply in IXR, if not more so. Just as in physical reality, people in VR may desire a virtual space where they can be alone or limit who else can access it, such as the “estates” of the non-immersive virtual world Second Life (Leenes, 2008). A lack of such spaces could render the entire IXR a “global village” where everyone’s business is public (O’Brolcháin et al., 2016). That said, even spaces providing privacy from other users would not be truly private if users still feel subject to platform surveillance.

This surveillance also threatens physical local privacy. One would not feel comfortable at home knowing that one’s every movement was being recorded and datafied. This scenario is a genuine concern, as IXR devices gather information about the user’s environment, including data about one’s physical space (e.g., their home, office, or wherever they are using the IXR devices) and about bystanders, whose personal and biometric data could then be collected without their knowledge (Pahi & Schroeder, 2023).Footnote 15 Just as inferences can be made about individuals based on their online data footprints (Wachter & Mittelstadt, 2019), data about physical locales could be used similarly. While some degree of physical local privacy can be achieved by shutting off IXR devices, during use, platforms and hardware manufacturers can compromise the local privacy of IXR users and the local and informational privacy of other individuals.

IXR threatens not only the privacy of users’ virtual and physical spaces but also their relationships in those spaces, implicating group privacy (Floridi, 2017). One’s communications with others could be observed in IXR, but biometric data could facilitate more subtle invasions. Researchers in a Stanford class held in VR used biometric data to infer information about group dynamics and relationships between users (Stanford HAI, 2022). The same could be done by observing an individual’s interactions with bystanders not using IXR. The IXR environment is never fully detached from the physical space within which it is experienced. Any interactions in the physical space, e.g., words exchanged with a co-worker who enters the office, may also be shared in the IXR environment.

5.4 Conclusion

This section has outlined the possible privacy infringements of IXR, expanding beyond the traditional notion of privacy as data protection—although biometric data privacy is a major concern in IXR—to consider decisional and local privacy. IXR opens new avenues for surveillance and persuasion in both the physical and virtual worlds; our recommendations for addressing them are in Section 7.

6 Applicability of Current EU Legislation

Existing EU legislation may mitigate some of the safety and privacy concerns outlined above. We consider the applicability to IXR of six areas of relevant legislation. While other areas of sectoral legislation may apply to specific IXR applications, we focus here on more broadly pertinent legislation based on our analysis of the specific risks inherent to IXR. In the field of consumer protection, we primarily examine the DSA, as it is intended to protect people in the face of technological developments. Other pieces of consumer protection legislation, such as those prohibiting deceptive advertising, unfair commercial practices, and unfair consumer contracts, will also apply to IXR. However, there is not yet sufficient evidence that IXR creates unique issues in these areas to justify including those regulations here (Maciejewski, 2023; Madiega et al., 2022); future work should investigate whether and how they may be implicated.

6.1 Product Safety Legislation

Existing and new product safety legislation will apply to IXR equipment, like headsets. The General Product Safety DirectiveFootnote 16 ensures that products placed on the market are safe and traceable and that consumers are informed of associated risks. A 2021 revision, set to take effect in 2024, updates the Directive to address sales in online marketplaces and the product safety challenges of new technologies, requiring actors to consider the cybersecurity requirements and learning abilities of products when assessing their safety.Footnote 17 The Directive notes that “the development of new technologies might bring new health risks to consumers, such as psychological risk, development risks, in particular for children, mental risks, depression, loss of sleep, or altered brain function” (Recital 21), meaning that the Directive could be interpreted to address the physical, mental, and even some social safety impacts of IXR technology. The 2022 Network and Information Security Directive 2 (NIS 2 Directive) will support this.Footnote 18 When implemented, the NIS 2 Directive will require Member States to include cybersecurity training in their national cybersecurity strategy (Article 7(2)(f)). “Essential and important entities,” which include online marketplaces, search engines, and social media platforms, will have to ensure that network and information systems are secured, and implement and oversee cybersecurity risk management measures (Article 11), helping prevent informational privacy harms related to data breaches.

Regarding other upcoming legislation, a proposalFootnote 19 to revise the 1985 Product Liability Directive would protect user safety by addressing liability for software (including AI systems) and digital services, including those provided by online platforms. Although currently untested, it would allow individuals to claim damages not just for physical injury, but also for “medically [recognized] harm to psychological health,” which could apply to harms caused by IXR platforms. It would also hold software companies responsible for harms caused by the updates (or lack thereof) or learning capabilities of their products (De Luca, 2023). However, while it would eliminate the €500 threshold for claimable property damage, it would not provide a remedy for social harms or for damages to mental health that do not rise to the threshold of “medically recognized.” Another piece of relevant upcoming legislation is the AI Liability Directive.Footnote 20 This Directive would protect victims whose privacy or safety has been harmed by AI, which many IXR platforms will likely use for content moderation and creation, among other purposes. It would also create rules for accessing evidence to establish damages and relieve claimants of the burden of proving that a system’s lack of compliance directly caused the damages suffered, which is beneficial given the opaque nature of many AI systems. These measures would help ensure that individuals are not further harmed by exclusion from evidence that would impede their case, and would enable just outcomes when safety has been violated. However, the “information gap” must be addressed so that individuals actually know when they have been harmed by an automated system (Ziosi, 2023).

6.2 ePrivacy Directive

IXR equipment may qualify as “terminal equipment” under the Privacy and Electronic Communications DirectiveFootnote 21 (ePrivacy Directive) because it connects to the Internet (Vale & Berrick, 2023). These devices store biometric and other sensitive information, including metadata automatically generated by users’ interactions with the platform. Article 5 of the ePrivacy Directive requires service providers to maintain the security of services and the confidentiality of information and to gain explicit consent to store or access information on a device. Consent is not required when this is strictly necessary for the service. Though the ePrivacy Directive offers some protection to data stored on IXR devices, it does not cover data once they leave the device. In this case, data could be transmitted to another entity for non-essential (i.e., commercial) purposes, although this is a questionable practice.

These data could also be subject to national data retention legislation, albeit with certain constraints. Article 15 of the ePrivacy Directive allows Member States to derogate from confidentiality requirements and retain data for specific security objectives (e.g., combating serious crime and ensuring national security). The Court of Justice of the European Union (CJEU) ruled that the unfettered retention of metadata, for a limited timeframe, is proportionate only to address genuine and foreseeable threats to national security.Footnote 22 Fighting other serious crimes only justifies retaining data with a specific link to public security threats.Footnote 23 The limits set by this jurisprudence, and the boundaries between the concepts of national and public security, are still subject to discussion (Eskens, 2022; Mitsilegas et al., 2023). Therefore, without clear definitions for valid security threats, it may be challenging to hamper the expansion of surveillance based on IXR data retention,Footnote 24 especially considering the aforementioned possibility of using IXR to facilitate terrorism.

The proposal for an ePrivacy RegulationFootnote 25 would expand privacy rules to electronic communications services such as WhatsApp (and presumably messaging in IXR environments) and guarantee the confidentiality of communications content and metadata (European Commission, 2022). The law would unambiguously cover machine-to-machine communications (Recital 12), protecting the transmission of IXR data generated outside the context of interpersonal communications. Adopting the ePrivacy Regulation would help protect communications and other data from interception, but negotiations are currently deadlocked (Bertuzzi, 2023).

6.3 General Data Protection Regulation

It is unclear how effectively the GDPR will apply to IXR. The GDPR deals with “personal data,” defined as “any information relating to an identified or identifiable natural person” (Article 4(1)). The European Parliament briefing on the metaverse acknowledges that the distinction between a data controller and data processor (Articles 24–28) will become blurred, which raises questions about where to collect user consent (Articles 6–7) and display privacy notices (Articles 12–13), especially if data collection will be “involuntary and continuous” (Madiega et al., 2022).

Additionally, because VR platforms will have users from across the world intermingling in a shared space, the questions of jurisdiction and data transfers become difficult, although adequacy decisions between the EU and third countries can partially solve the data transfer conundrum. Since a privacy law “jurisdiction selection clause” likely would not hold up (Artzt, 2022), this could lead to a powerful Brussels Effect where platforms default to the strongest mandated protections, depending on how specific clauses of the GDPR are interpreted.

Article 6 provides different legal bases for personal data processing, including “consent,” but also the “performance of a contract” (Article 6(1)(b)) and the pursuit of “legitimate interests” of the controller or a third party, unless “overridden by the interests or fundamental rights or freedoms of the data subject” (Article 6(1)(f)). When applied to targeted advertising, these bases are controversial. The European Data Protection Board (EDPB) ruled that the contract clause cannot be used for targeted advertising,Footnote 26 prompting Meta to shift to the “legitimate interests” clause. However, TikTok was warned that its “legitimate interests” were not sufficient to justify processing for targeted advertising (Lomas, 2023). If the “legitimate interests” basis does end up being used, it requires that users be able to opt out of the processing (Bryant, 2023), providing additional protection should it be implemented effectively. Although consent seems a highly legitimate justification for processing, issues have emerged with the GDPR’s consent regime: consent dialogues often present information deceptively and fatigue users with their sheer quantity (Utz et al., 2019). Thus, even when presented with an ostensibly valid consent choice, users could end up consenting to more or different data collection than they intended.

Processing biometric data “for the purpose of uniquely identifying a natural person,” as well as processing data “revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership” or data regarding one’s health, sex life, or sexual orientation (Article 9(1)), is also banned unless explicit consent has been obtained (Article 9(2)(a)). As previously argued, some biometric data can be aggregated to uniquely identify a person, so this article likely applies. It could protect bystanders from having their data collected and used to create “shadow profiles,” but this should be clarified. Among the exceptions to the processing restrictions, IXR platforms may try to leverage the exception permitting nonconsensual processing of “personal data which are manifestly made public by the data subject” (Article 9(2)(e)). The “manifestly made public” clause has little legislative guidance surrounding it (Dove & Chen, 2021), but high-level guidance requires it to be “construed strictly and as requiring the data subject to deliberately make his or her personal data public” (EU Agency for Fundamental Rights, 2018, 162). However, platforms might argue that an individual occupying a virtual or physical public space while using IXR devices makes at least some of their biometric data public, which could give platforms carte blanche to process it and identify people. While this argument may be reasonable when applied to, say, the appearance of an avatar, extending it to body-based biometric data collected by IXR equipment becomes more concerning. It likely could not be applied to internal biometric measurements like blood pressure or heart rate. Still, platforms could argue that avatar movement data or movement data while using an AR device are public. However, while one’s movements are technically observable in public in the physical world, one does not expect them to be constantly monitored (Schroeder, 2010, 235). Observation at the level of an IXR platform—which could record precisely where a person or avatar goes, whom they interact with, the details of those interactions, and how the user’s body is moving—is an invasion of privacy.

Article 20, which establishes the right to personal data portability, could create a right to interoperability by proxy, allowing users to transfer their personal data from one IXR platform to another. However, this would require new data standards for IXR-specific data. If realized, this could enable users to “vote with their feet” and transfer their data to another IXR platform if they do not like the practices of a given platform. This would not, however, establish standards for digital purchases and NFTs, because although transaction data may qualify as personal data, the digital items themselves likely would not.
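To illustrate what such portability might look like in practice, the following sketch (in Python, used here purely for illustration) shows one hypothetical shape of a machine-readable Article 20 export from an IXR platform. The PortableIXRData structure and its field names are our own assumptions rather than an existing or proposed standard, and, as noted above, the purchased digital items themselves are excluded.

# Hypothetical sketch of an Article 20-style data export from an IXR platform.
# The schema is illustrative only; no such standard currently exists.
import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class AvatarProfile:
    display_name: str
    appearance_params: dict          # e.g., height, skin tone, outfit identifiers

@dataclass
class TransactionRecord:
    item_id: str                     # reference to the purchased item
    timestamp: str                   # ISO 8601
    price_eur: float
    # Note: the digital item itself is not included, only the personal
    # transaction data, mirroring the limits of Article 20 discussed above.

@dataclass
class PortableIXRData:
    user_id: str
    avatar: AvatarProfile
    contacts: List[str] = field(default_factory=list)           # user-provided social graph
    transactions: List[TransactionRecord] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize to a structured, machine-readable format (GDPR Art. 20(1))."""
        return json.dumps(asdict(self), indent=2)

export = PortableIXRData(
    user_id="u-123",
    avatar=AvatarProfile("Ada", {"height_cm": 170, "outfit": "default"}),
    contacts=["u-456"],
    transactions=[TransactionRecord("item-789", "2024-01-15T12:00:00Z", 4.99)],
)
print(export.to_json())

Any real standard would need to be developed jointly with platforms and standards bodies, particularly for IXR-specific data such as avatar appearance and movement-derived records.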

How rules and laws are enforced in IXR will significantly impact user safety. Under Article 22, data subjects have the “right not to be subject to a decision based solely on automated processing” which produces “legal effects” or “similarly significantly affects” them (Article 22(1)). This would preclude purely automated content moderation and rule enforcement in IXR unless an exception applies, such as explicit consent, contractual necessity, or authorization by law with safeguards for the data subject’s rights and freedoms, and such exceptions could quite plausibly be invoked.

One promising ruling for user privacy came in the CJEU’s OT v Vyriausioji tarnybinės etikos komisija,Footnote 27 which found that the processing of personal data that could indirectly reveal “sensitive information concerning a natural person” is subject to the Article 9(1) prohibition on processing when it could identify the person (unless an Article 9(2) exception applies) (Maynard et al., 2022). This ruling could protect IXR users from having sensitive inferences made about them without their knowledge. However, because aggregated and non-personal data fall outside the GDPR’s purview, users remain vulnerable to the use of anonymized or synthetic data derived from their data to mine behavioral insights at a group level (Renieris, 2023, 120). These insights could then be used to target content, train surveillance technology, or otherwise refine the surveillant physics and extractive behavior of IXR platforms (McStay, 2023), facilitating large-scale invasions of privacy (Renieris, 2023, 84–88).

Other promising rulings include the State Commissioner for Data Protection Lower Saxony’s decision to fine a company €10.4 million for video-monitoring its employees over two years and at times retaining recordings for up to 60 days (LfD Lower Saxony, 2021), and Deliberação/2021/622 of the Portuguese DPA,Footnote 28 which found that using proctoring software to monitor students via browser, camera, and facial analysis violated their privacy rights. These decisions hold promise for curtailing employee and student monitoring in IXR, where surveillance could be even more comprehensive than video recordings (Martin, 2022) and would involve even more of the biometric analysis that the Portuguese DPA objected to.

6.4 Digital Services Act

The DSA, which entered into force in November 2022, will impact how IXR platforms deal with illegal content and targeted advertisements. The DSA was intended to create a safer digital sphere, protect fundamental rights, and unify regulation and enforcement. It establishes a “notice and action” system for removing illegal content, requiring platforms to provide mechanisms for users to report illegal content (Article 16) and to prioritize reports from “trusted flagger” entities that detect illegal content (Article 22). Additionally, under the “Regulation on addressing the dissemination of terrorist content online” (“Terrorism Regulation”),Footnote 29 terrorist content must be removed within one hour of a removal order. However, due to the pan-jurisdictional nature of IXR, determining the audience for which content must be removed could be difficult (Hine, 2023).
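As a rough illustration of how an IXR platform might operationalize these layered obligations, the sketch below triages incoming notices so that removal orders for terrorist content are handled first, followed by trusted-flagger reports, then other user notices. The queue structure and priority values are illustrative assumptions; the legislation prescribes the one-hour deadline and priority handling, not any particular implementation.

# Illustrative sketch of a notice-and-action triage queue (DSA Arts. 16 and 22;
# Terrorism Regulation one-hour removal orders). Priorities are assumptions.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Notice:
    priority: int
    notice_id: str = field(compare=False)
    category: str = field(compare=False)          # e.g., "terrorist_order", "illegal_content"
    trusted_flagger: bool = field(compare=False)

def triage(notices):
    """Return notices in processing order: terrorist removal orders first
    (one-hour deadline), then trusted-flagger reports, then other notices."""
    queue = []
    for n in notices:
        if n.category == "terrorist_order":
            n.priority = 0
        elif n.trusted_flagger:
            n.priority = 1
        else:
            n.priority = 2
        heapq.heappush(queue, n)
    return [heapq.heappop(queue) for _ in range(len(queue))]

ordered = triage([
    Notice(9, "n1", "illegal_content", trusted_flagger=False),
    Notice(9, "n2", "illegal_content", trusted_flagger=True),
    Notice(9, "n3", "terrorist_order", trusted_flagger=False),
])
print([n.notice_id for n in ordered])   # ['n3', 'n2', 'n1']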

The DSA bans ads targeted at minors (Article 28) and based on sensitive characteristics (Article 26), although how this applies to inferred characteristics is unclear. Ads should display the advertiser and sponsor in real time and show how the ad is targeted (Article 26). This may be difficult to implement in IXR, where ads may not be static experiences on a sidebar or within a feed. However, if implemented effectively, Article 26, combined with the requirement that “very large online platforms” (VLOPs) and search engines keep a user-accessible repository of ads (Article 39), could help protect user autonomy by informing users about how they are being targeted. It is worth stressing that if information is obscured or users cannot easily access the ad repository, the DSA may face the same problems as the GDPR’s consent regime. The requirement that platforms not impair users’ ability to make “free and informed decisions” through manipulative design (Article 25) is also intended to protect user autonomy, and Recital 67 clarifies that such design includes “dark patterns.” While the EDPB has issued guidelines on recognizing dark patterns on social media platforms,Footnote 30 they will have to be adapted to account for immersive environments. This could be facilitated by Article 40, which requires that VLOPs allow vetted researchers access to data for research on systemic risks.

While the DSA offers promising protections for users, many of its requirements, including annual systemic risk analysis (Article 34) and independent compliance auditing (Article 37), only apply to VLOPs. This risks creating a regulatory blind spot for “risks disseminated by platforms below the VLOP-threshold” (Laux et al., 2021) of 45 million monthly active EU users, meaning that IXR platforms—none of which currently meet the threshold—could slip through the regulatory cracks and cause significant harm.

6.5 Digital Markets Act

The Digital Markets Act (DMA)Footnote 31 also went into effect in November of 2022, with full compliance expected as of March 2024 (“Digital Markets Act”, 2022). It is intended to manage the power of entrenched, large online “gatekeepers” that provide “core platform services” such as social networks, search engines, operating systems, web browsers, and online advertising (Article 2). In terms of user privacy, Article 5 prevents gatekeepers from non-consensually combining personal data from their core platform service with data from other services or third-party sources and from cross-using personal data from the gatekeeper’s other services (Article 5(2)). This may prevent specific informational privacy harms by hampering platforms from creating larger aggregated datasets and mining them for behavioral insights. The European Commission will have auditing powers to ensure compliance (Article 23). However, the DMA faces the same scope issues as the DSA as it only applies to large companies (Article 3(2)), meaning that smaller platforms could still combine and cross-use data in concerning ways.

Other provisions prevent IXR platforms from prioritizing their own events over those of creators on the platform (which Horizon Worlds users report Meta is currently doing (Peters, 2023)). There are also messaging, software, and hardware interoperability requirements (Articles 6–7), and IXR hardware providers will have to allow third-party app stores on their devices (Article 6(4)). However, as these issues are not directly relevant to user safety and privacy, we leave their fuller exploration to future work.

6.6 Artificial Intelligence Act

The AI Act will impact how AI systems can be used in IXR since the definition of an “AI system” includes those that influence “virtual environments” (Article 3). According to the final text,Footnote 32 platforms will be obligated to notify users when they are interacting with AI systems (including Artificial Avatars (Petrányi et al., 2023)), and synthetic content must be labeled in machine-readable format and disclosed (Article 50(2), (4)), which could reduce deception and manipulation. Notification requirements also apply to emotion recognition and biometric categorization systems (Article 50(3)), which is beneficial for transparency but would not address the potential exploitative purposes of those systems. Furthermore, this would not apply to systems “permitted by law to detect, prevent, investigate, and prosecute criminal offences” (Article 50(3)), which opens a potential loophole for platforms working with law enforcement.

While the banning of real-time biometric identification in public spaces was much debated, in the end, it was implemented only for law enforcement—not for private companies—with exceptions so broad they could become the rule (Article 5(1)(h), (2)). Additionally, this ban would only apply to physical spaces (Recital 19). Therefore, while AR devices could not generally be used for real-time identification in physical spaces by law enforcement, biometric identification could still be done in virtual environments with similar effects. However, other clauses may offer more protection in IXR. Emotion recognition systems will be banned in places of education and work, which could provide a foundation to ban them in corresponding IXR environments (Article 5(1)(f)). Moreover, profiling individuals based on biometric data to infer protected characteristics will be banned, which could limit profiling using IXR biometric data, although it does not apply to “lawfully acquired biometric datasets” in law enforcement (Article 5(1)(g)). Biometric categorization based on “sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics” is permitted as a high-risk system (Annex III(1)(b)), raising the question of what kinds of systems will actually be banned. Recital 16 sheds some light on the issue, saying that the AI Act is not concerned with “biometric categorization systems that are a purely ancillary feature intrinsically linked to another commercial service, meaning that the feature cannot, for objective technical reasons, be used without the principal service.” This would include filters that categorize facial or body features on online marketplaces by allowing consumers to preview the display of the product on themselves and make purchase decisions, or filters used on online social network services that categorize facial or body features to allow users to add or modify pictures or videos, which may thus include categorization to enable AR features.

The AI Act could be relevant to other uses of AI in IXR. Systems that could subliminally distort human behavior in ways that cause “significant harm” will be banned (Article 5(1)(a)). This could offer protection from manipulation in IXR, but it depends on how “significant harm” is interpreted. While emotion recognition in workplaces and educational institutions will be banned (Article 5(1)(f)), AI systems that monitor worker performance (Annex III(4)(b)) and student behavior during assessments (Annex III(3)(d)) are permitted, albeit classified as “high-risk.” All permissible biometrics-based systems are likewise high-risk (Annex III(1)), and scraping facial images from the Internet or CCTV footage for facial recognition databases will be banned (Article 5(1)(e)), but scraping non-facial biometric data or using an AR device to capture data in real time would be permitted.

In the next section, we make recommendations to EU legislators to address some of the regulatory gaps noted and to anticipate future challenges posed by IXR.

7 Recommendations to the EU Legislator

This section details our recommendations to EU legislators based on the previous risk and legal analyses. These recommendations are not exhaustive but a contribution to the ongoing debate, and where possible, we refer to other sources that make similar suggestions. Not all identified risks merit new hard law, as locking in regulation built around overly specific provisions may be detrimental. Some recommendations could be implemented by creating or modifying primary law, some could be based on secondary legislation, and some could involve regulatory or judicial interpretations. Others, such as funding research initiatives, do not specifically relate to hard or soft law, but all are intended to fill gaps in existing policy to uphold safety and privacy in IXR. Within each area of safety or privacy, recommendations for new policies come first, followed by proposals to modify current legislation, and finally regulatory and non-legislative recommendations.

7.1 Physical Safety

1. Legal requirements should be introduced to provide users with easy access to safety tools that allow them to:

   a. Mute other users and/or blur their avatars to mitigate verbal and physical harassment.

   b. Prevent other avatars from entering their personal space.

   c. Quickly enter an out-of-world “safe zone” where they cannot be followed, seen, or interacted with (like the Horizon Worlds “Safe Zone”), and re-enter the world at a previewable location of their choosing. Note that this would not entail invisibility, but entering a location removed from the virtual world.

   d. Report malicious behavior by other users. Report patterns should be monitored to ensure that spam or vindictive reports are not used to target individuals or communities.

2. Access to “digital twins” that may provide intelligence for groups planning attacks should be restricted by law and/or their owners (World Economic Forum, 2022).

3. Assault and battery laws should be clarified, or new laws enacted, to explicitly cover virtual attacks where no physical contact takes place.

4. The Terrorism Regulation should be clarified, by amendment or judicial interpretation, to establish that immersive environments where terrorist or extremist groups gather, or ones constructed to promote their ideology, are also subject to removal and reporting to authorities.

5. In concert with IXR platforms, EU Member States should create initiatives or augment existing ones to promote safe drinking habits surrounding IXR, especially targeting “nightlife” areas.

7.2 Mental Safety

1. To protect user autonomy, new legislation should be introduced to require the following:

   a. Artificial Avatars should be prominently labeled (“PwC Digital Ethics for Responsible Metaverse”, 2022) whenever they could feasibly be confused with a human-powered avatar, and users should be able to request, easily and effectively, that such avatars immediately cease contact with them. In situations such as games where players expect to encounter Artificial Avatars, a disclosure mechanism may be suitable. Legislation requiring this will build on the DSA’s disclosure requirements by empowering users to avoid potentially manipulative Artificial Avatar behavior.

   b. Surroundings that may appear different from what other users see should include a visual disclaimer to that effect.

   c. Platforms should be prohibited from undertaking any research or experimentation aimed at manipulating users’ emotional or mental states unless such studies explicitly and informatively recruit participants, with appropriate ethical, legal, and human subject research measures and informed consent.

   d. To maintain user awareness of how content in IXR can differ from person to person, users should be able to view the perspective of another user, with that user’s permission, to see what that person’s IXR and their own presence in it look like. Differences (e.g., in user-targeted content) should be highlighted on request. Content deemed illegal based on a user’s jurisdiction should still not be visible in this mode.

   e. Platforms should be prohibited from accepting payments from users displaying signs of compulsive buying behavior that could be linked to Internet or gaming addiction (Granero et al., 2016) and from engaging in manipulative promotion of goods or services.

   f. To help prevent addiction, VR platforms should be required to encourage users to take a break after some threshold time has been reached. This threshold should be further studied. Every 30 minutes, as enforced by the Stanford VR class (Stanford HAI, 2022), may be excessively paternalistic, but every hour (as advocated by some Horizon Worlds moderators (Hill, 2022)) could be a reasonable starting threshold. Empirical reporting suggests that it is easy to lose track of time in VR (Hill, 2022), so these nudges should also indicate the current time.

2. Until research establishes what risks IXR may pose to children, platforms and hardware manufacturers should be required to establish mechanisms to prevent access by children under 13. That age is a conventional standard set by an American data protection law from 1998 (Canales, 2022), but it could be an effective starting point if the recommended enforcement measures are operationalized. Thirteen is technically the minimum age for using some devices, although Meta lowered the minimum age for its Quest headsets to 10 years old (Duffy, 2023); regardless, age minimums are not enforced (Hill, 2022). Additionally, IXR experiences should be able to set age restrictions for entry, and any with explicitly adult content should not allow access to children.

   a. Age verification should happen at both the account and device level to prevent a young person from using an adult’s account. Account-level verification could involve credit card or state-issued ID verification, for example in line with how Google interprets the Audiovisual Media Services DirectiveFootnote 33 for access to age-restricted content (“Access Age-Restricted Content & Features,” n.d.). Instagram has been testing face scans to verify age (Malik, 2023), which can be effective; the contractor claims that its model’s mean absolute error is 1.4 years for ages 6–12 and 1.5 years for ages 13–17, although some gender and skin tone discrepancies remain (Yoti, 2023). If the model cannot determine the user’s age confidently, photo identification could be requested as a fallback (and should also be a primary option for those who do not want to provide biometrics); trusted third parties and/or zero-knowledge proof techniques could assist with making this more secure (a sketch of one possible verification flow follows this list). This accords with recommendations from an International Telecommunication Union (ITU) working group on data protection (ITU Focus Group on Metaverse FGMV-12, 2023). Device-level verification could involve facial or retinal authentication via the device, with scans encrypted and stored only locally on the device, similar to how Apple stores Face ID data (“Face ID & Privacy,” n.d.). Additional research should explore verification measures for those without state-issued IDs. All scans and images used for age verification should be deleted promptly by all parties involved after verification is complete.

3. People of age should be able to register on IXR platforms without providing identification (although this may still require a photograph for age verification, and anyone suspected of being under 13 must prove their age; this is necessary to balance privacy with children’s protection). Avatars using a real person’s name should be able to request free identity verification by furnishing proof of identification corresponding to the avatar’s name, or proof that the name is a plausible alias also used in the physical world. Verified avatars should be labeled as such. Avatars representing brands should also be able to request verification that they work with that brand, similar to how Twitter’s legacy brand verification worked (“Legacy Verification Policy,” n.d.). VR spaces should be able to limit access to verified avatars.

4. To prevent the propagation of deceptive clones, near-clones, and deepfakes in IXR, legislation or judicial interpretation should clarify that the right to one’s image extends to an avatar and virtual environments.

5. Due to the unique risks of IXR, provisions in the DSA and DMA on advertising, dark patterns, data processing, and portability should be expanded to all IXR platforms regardless of size.

6. Advertisement archives, as laid out in Article 39 of the DSA, should be required to include information on exactly where and how an ad was displayed or performed (in the case of an Artificial Avatar promotion) in an IXR environment.

7. Targeted transitive and subliminal advertising (e.g., transforming all beverages into a specific soft drink or placing ads on passing cars) should be prohibited because of the potential for violating user autonomy and the impossibility of effective user interaction with the disclosures required by the DSA. This could be clarified in the AI Act (Franklin et al., 2022). Non-targeted ads of this nature may be permitted due to their similarity to mass campaigns and sponsored events in the physical world, but the user must have easy access to the general ad archive as well as to a summary of what ads they were presented with and other information that would be included in the DSA archive.

8. How intellectual property protections and property law apply to IXR should be clarified, by legislative amendment or judicial interpretation, to prevent IP theft and loss of purchased digital items (Maciejewski, 2023).

9. EU Member States should fund research into the long-term and addictive effects of IXR, especially how they may differentially impact children and marginalized groups.
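To make recommendation 2.a more concrete, the following sketch outlines one possible account-level verification flow: facial age estimation first, photo-ID verification as a fallback or as a primary option, and prompt deletion of all scans. The helper functions, the 0.9 confidence threshold, and the dummy return values are hypothetical assumptions for illustration, not references to any existing product or API.

# Hypothetical account-level age verification flow (recommendation 2.a).
# estimate_age_from_scan() and verify_with_photo_id() stand in for services a
# platform might use; thresholds and return values are assumptions.
MINIMUM_AGE = 13

def estimate_age_from_scan(face_scan: bytes):
    """Placeholder for a facial age-estimation service; returns (age, confidence)."""
    return 21.0, 0.95  # dummy values for illustration only

def verify_with_photo_id(id_image: bytes) -> int:
    """Placeholder for a government-ID verification service; returns stated age."""
    return 34  # dummy value for illustration only

def verify_account_age(face_scan=None, id_image=None, confidence_threshold=0.9):
    """Grant access only if the user is verifiably over the minimum age.
    Users may skip the biometric path entirely and provide an ID instead."""
    try:
        if face_scan is not None:
            age, confidence = estimate_age_from_scan(face_scan)
            if confidence >= confidence_threshold:
                return age >= MINIMUM_AGE
        if id_image is not None:
            return verify_with_photo_id(id_image) >= MINIMUM_AGE
        return False  # no usable evidence: deny access
    finally:
        # Stands in for deleting all stored scans and images promptly after
        # verification, as the recommendation requires.
        face_scan, id_image = None, None

print(verify_account_age(face_scan=b"raw-scan-bytes"))  # True with the dummy estimator

A production system would also need the device-level checks, zero-knowledge or trusted-third-party options, and non-ID pathways described above; this sketch covers only the ordering of the account-level steps.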

7.3 Social Stability

1. Standards of accessibility for hardware and software should be created and enforced by legislation like the Web Accessibility DirectiveFootnote 34 to ensure that individuals with physical and cognitive disabilities can access IXR.

2. Competent authorities under the DSA should require platforms to institute effective automated content moderation systems. Details of these systems should be clearly communicated to users, even if contractual necessity rather than user consent is used to justify the use of automation (Article 22 GDPR), as should sanctions for violating content policies. We acknowledge that automated content moderation has not always proved sufficiently flexible and pluralistic in its implementation, especially for marginalized communities (Oliva et al., 2021). Thus, when a user is sanctioned, a full explanation of how their content violated a specific policy should be provided, as should an appeals mechanism that allows for human review of their case (a minimal sketch of such a decision-and-appeal record follows this list).

3. EU Member States should fund initiatives to research harassment in IXR, as well as digital and IXR literacy campaigns that help users understand the potential risks and benefits of IXR. Ensuring that users are informed will help guard against new scams and other risks to safety.

4. Member States should be aware that, due to the pan-jurisdictional nature of IXR, platforms may face pressure from governments—both within and outside the EU—to ban some forms of content and expression in ways that conflict with expressed EU values. For instance, governments that restrict LGBTQ+ expression could pressure IXR platforms to censor LGBTQ+ content (Hine, 2023). Member States should be prepared for possible political pressure on EU-based platforms and governments.
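One minimal way to satisfy the explanation-and-appeal requirement in recommendation 2 above is sketched below: every automated sanction carries the specific policy violated, a plain-language explanation, and an appeal hook that routes the case to human review. The ModerationDecision structure, its fields, and the review queue are illustrative assumptions rather than a prescribed design.

# Illustrative record for an automated moderation decision with an appeal path
# (recommendation 2 above). Field names and the review queue are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

HUMAN_REVIEW_QUEUE: list = []

@dataclass
class ModerationDecision:
    user_id: str
    content_id: str
    policy_violated: str    # the specific policy clause, not a generic label
    explanation: str        # plain-language account of how the content violated it
    sanction: str           # e.g., "content_removed", "24h_suspension"
    automated: bool = True
    appealed: bool = False
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def appeal(self) -> None:
        """Route the case to human review, as the recommendation requires."""
        self.appealed = True
        HUMAN_REVIEW_QUEUE.append(self)

decision = ModerationDecision(
    user_id="u-123",
    content_id="c-456",
    policy_violated="Community Policy 4.2 (harassment)",
    explanation="The avatar repeatedly followed and shouted at another user after being muted.",
    sanction="24h_suspension",
)
decision.appeal()
print(len(HUMAN_REVIEW_QUEUE))   # 1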

7.4 Informational Privacy

1. Biometric data should be used in real time only for the functionality and refinement of IXR experiences and for therapeutic or research-related experiences with explicit consent and ethical-legal approval. They should never be retained by platforms, hardware providers, employers, or schools, even in anonymized or aggregate form. Users in IXR experiences should be able to opt out of biometric data use (except for what is strictly necessary for functionality, e.g., for avatar motion) and still have a similar experience. This will go beyond the provisions in the AI Act to prevent the construction of “kinematic fingerprints” that could be used for autonomy-violating content or ad targeting (Spiegel, 2018) and the construction of aggregated or anonymized datasets that could be used to mine group-level behavioral insights (Renieris, 2023). Biometric data also should not be used to make inferences about other characteristics of a user, including their affective states or cognitive processes, regardless of whether those characteristics are protected. This should be mandated by new legislation or revision of the GDPR.

2. Scraping of any form of biometric data, as well as the nonconsensual collection and aggregation of biometric data using AR devices, should be banned by legislation.

3. The GDPR should be comprehensively analyzed to determine whether the data processor/controller distinction is still fit for purpose (Martin, 2022). If specific IXR legislation is adopted, it should clarify the allocation of data protection responsibilities between platforms, hardware providers, and advertisers. In the case of joint controllership, legal arrangements explaining the allocation of responsibility should be made mandatory (cf. Article 26 GDPR). Upon user request, IXR platforms should display a point of contact for exercising data protection rights.

4. Under EU data protection law, competent authorities should require IXR platforms that allow users to record to ensure that avatars whose users did not explicitly consent to being recorded are blurred or otherwise anonymized when video is exported (a minimal sketch of such an export step follows this list). A clear indication should be displayed when a user is recording, both in VR and on AR devices in the physical world.

5. Competent authorities under EU data protection law should require device providers to inform users, in an understandable format, about exactly what biometric data their IXR devices can collect, both on first use and whenever terms are modified, with reminders at least annually. IXR platforms and experience providers should give users similar information about what data a specific IXR experience collects.
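The consent-aware export step in recommendation 4 above could look something like the following sketch, in which any avatar whose user has not explicitly consented to recording is blurred before a recording leaves the platform. The blur_avatar helper and the consent registry are hypothetical placeholders, not an existing API.

# Sketch of consent-aware recording export (recommendation 4 above).
# blur_avatar() and the consent registry are hypothetical placeholders.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AvatarClip:
    user_id: str
    frames: list            # per-avatar video/pose data (placeholder)
    blurred: bool = False

def blur_avatar(clip: AvatarClip) -> AvatarClip:
    """Placeholder for visual anonymization (blurring or replacement)."""
    return AvatarClip(user_id="anonymized", frames=clip.frames, blurred=True)

def export_recording(clips: List[AvatarClip],
                     recording_consent: Dict[str, bool]) -> List[AvatarClip]:
    """Blur every avatar whose user has not explicitly opted in to recording."""
    return [
        clip if recording_consent.get(clip.user_id, False) else blur_avatar(clip)
        for clip in clips
    ]

exported = export_recording(
    [AvatarClip("u-1", frames=[]), AvatarClip("u-2", frames=[])],
    recording_consent={"u-1": True},      # u-2 never consented
)
print([(c.user_id, c.blurred) for c in exported])   # [('u-1', False), ('anonymized', True)]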

7.5 Decisional Privacy

1. The EDPB should clarify what dark patterns look like in IXR, and a mechanism for reporting them should be established.

7.6 Local Privacy

1. Gathering bystander data and creating “shadow profiles” containing data about individuals in the vicinity of IXR users should be prohibited by primary law or judicial interpretation.

2. VR users should always have access to a private space, whether a home-like environment or a “lobby,” where they can turn off recording by individuals and by the platform; to protect these spaces, platforms should develop alternative behavior-reporting mechanisms that do not rely on video evidence. Legislation should clarify the distinction between public and private spaces, and IXR providers should remind users where their actions are subject to monitoring, recording, and/or analysis.

3. Clauses of the AI Act that deal with AI in physical spaces should be expanded to include virtual spaces.

8 Conclusion

IXR offers great potential to augment the physical world and open up new experiences, but its accompanying risks must be addressed. In this article, we have outlined the risks to safety and privacy in IXR and offered policy suggestions for EU legislators. Some of these risks already exist in the physical or digital worlds, but IXR could exacerbate them, while others are novel. Many will disproportionately impact marginalized and disabled users, who should receive particular consideration. We do not presume to have covered all risks, but we hope our proposed policies may provide a flexible basis to address emergent risks.

Governance of IXR will require harmonization because it involves companies and users from across the globe. Part of this effort may involve the consideration of new human rights, which could go as far as expanding personality rights to avatars and recognizing a right to “mental self-determination” (Michael & Metzinger, 2016); rights to experiential authenticity, emotional privacy, and behavioral privacy (Rosenberg, 2022a); and “neurorights” to physical and mental integrity and to the protection of brain activity and related data, as enshrined in Chile’s constitution (McCay, 2022). The feasibility and necessity of some of these proposals have been questioned (Bublitz, 2022). Some have instead suggested an expansive conception of human rights to challenge the datafication of our physical and virtual worlds (Renieris, 2023) or a broader interpretation of the right to freedom of thought (Hertz, 2022) to protect mental self-determination. Regardless of the ultimate approach, some tensions are inevitable when contemplating how to safeguard fundamental rights, such as trade-offs between safety and freedom of expression or privacy. Good regulation will have to carefully consider how to balance these conflicting rights. We hope this work will support global, cross-sectoral discussions across industry, academia, government, civil society, and beyond. Our new extended reality depends on it.