DOI: 10.1145/3544548.3581376 | CHI Conference Proceedings | Research Article | Open Access

Algorithmic Power or Punishment: Information Worker Perspectives on Passive Sensing Enabled AI Phenotyping of Performance and Wellbeing

Published: 19 April 2023

Abstract

We are witnessing an emergence of Passive Sensing enabled AI (PSAI) that provides dynamic insights into the performance and wellbeing of information workers. Hybrid work paradigms have created new opportunities for PSAI, but they have also fostered anxieties about misuse and privacy intrusions within a power asymmetry. At this juncture, it is unclear whether those who are sensed can find these systems acceptable. We conducted scenario-based interviews of 28 information workers to highlight their perspectives as data subjects in PSAI. We unpack their expectations using the Contextual Integrity framework of privacy and information gathering. Participants described the appropriateness of PSAI based on its impact on job consequences, work-life boundaries, and preservation of flexibility. They perceived that PSAI inferences could be shared with selected stakeholders if they could negotiate the algorithmic inferences. Our findings help envision worker-centric approaches to implementing PSAI as an empowering tool in the future of work.

1 Introduction

A better understanding of our effectiveness at work can enable us to improve our daily experiences and meet our goals. Organizations also have a vested interest in understanding worker effectiveness, as it impacts their bottom line [80]. Therefore, it is not a surprise that, in the face of the economic downturn of 2022, the CEO of Alphabet Inc. felt that employees needed to be more productive in the post-COVID-19 era [9], and the CEO of BlackRock claimed that a return to onsite work could increase productivity [47]. One emerging approach to quantifying effectiveness has been to passively sense worker behaviors and enable algorithms that can give workers insights to improve their work quality, experience, and coordination. These approaches use digital data to determine an individual's observable characteristics and expressions, or their "phenotype", and are thus referred to as Digital Phenotyping [55]. In the context of this paper, we focus on Digital Phenotyping that goes beyond statistical measurement and incorporates increasingly complex machine learning to estimate an individual's behavioral wellbeing [133]. We refer to such technological approaches as Passive Sensing enabled AI (PSAI) to represent both the data source and the method. These technologies promise objective and precise insights into both performance and wellbeing [33, 94]. Admittedly, there are some positive outcomes of adopting PSAI: in concept, it stands to remove implicit biases in the workplace and illuminate many overlooked factors [13, 94]. That said, scholars and labor advocates are also noting problematic uses of PSAI that could harm workers, who exist in a power asymmetry where they may be disenfranchised [125]. We need to take a closer look at how workers perceive PSAI in order to make this technology work for workers, as opposed to making workers work for the technology.
Recent research has indicated anxieties among data subjects whose data are used to make algorithmic inferences for purposes of work (e.g., modeling past experiences to predict success) [108]. These concerns range from potentially exacerbated discrimination to compromised privacy expectations [68]. Moreover, tensions between supervision and surveillance at the workplace have been well documented [12, 19, 76]. Aloisi and Gramano rightly noted that "Artificial Intelligence is watching you at work," given the emergent practices of individual-level profiling, organizing, and monitoring made possible by AI. Yet, AI seems to turn a "blind" eye to these grave problems because worker surveillance is almost entirely unregulated and opaque [51]. Algorithmic estimates such as those promised by PSAI are moving up the income ladder into spaces like information work [13, 63]. Information work, which primarily involves processing information — a less fungible task than, say, manufacturing — is notorious for having nebulous indicators of effectiveness and, by corollary, success [72]. This ambiguity presents an opportunity for PSAI systems that can model not only work activities (e.g., application time use, mobile distractions, work synchronization) but also non-work correlates (e.g., sleep and movement) [11, 67, 92, 105, 122, 126]. Simplistically, the information flow of PSAI begins with behavioral data captured from the subject, which is then modeled by AI to produce inferences. In reality, such information flows are likely to be complicated by factors like how the data is collected, whom the inferences are shared with, and for what purpose.
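To make this flow concrete, the following minimal sketch illustrates the three stages named above: behavioral data captured from the subject, a model producing an inference, and the distribution of that inference to stakeholders. All names in the sketch are hypothetical and purely illustrative; they do not correspond to any deployed PSAI system described in this paper.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative sketch of the PSAI information flow: sense -> model -> distribute.
# Names are hypothetical; no real PSAI system is implied.

@dataclass
class SensedSignal:
    subject_id: str    # the information worker being sensed
    source: str        # how the data is collected, e.g., "email", "wearable", "wifi"
    value: float       # an abstracted behavioral feature

@dataclass
class Inference:
    subject_id: str
    label: str             # for what purpose, e.g., "performance" or "wellbeing"
    score: float
    recipients: List[str]  # whom the inference is distributed to

def phenotype(signals: List[SensedSignal],
              model: Callable[[List[float]], float],
              recipients: List[str]) -> Inference:
    """Turn passively sensed behavioral signals into a distributed inference."""
    features = [s.value for s in signals]
    return Inference(signals[0].subject_id, "wellbeing", model(features), recipients)

# A trivial averaging "model" stands in for learned models used by real PSAI.
signals = [SensedSignal("P1", "email", 0.4), SensedSignal("P1", "wearable", 0.8)]
print(phenotype(signals, lambda xs: sum(xs) / len(xs), recipients=["self", "manager"]))
```

Each complicating factor above maps onto a parameter of this sketch: `source` captures how data is collected, `recipients` captures whom inferences are shared with, and the choice of `model` and `label` captures for what purpose.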
Deploying PSAI without understanding how the data subjects, the Information Workers (IWs), perceive such arguably forceful systems risks exacerbating the power asymmetry. As of 2020, the number of IWs was estimated to be around a billion people across the globe [113]. IWs are increasingly demanding remote work, but this shift also raises concerns about PSAI intruding on personal privacy [7, 74]. Unfortunately, some organizations only let IWs work remotely if they use PSAI, forcing them to relinquish their privacy [13, 63]. Adding to this concern, many commercially available instances of PSAI for work are not designed for self-reflection or self-management; they expect IWs to become data subjects of an obscure information flow that has unclear benefits for their own growth and presumably caters solely to the employer's interests [2, 53]. According to the Contextual Integrity framework by Nissenbaum, to protect the interest of the data subject, new information flows must follow new informational norms [102]. This paper aims to explain those norms for using PSAI in information work through two questions:
Norms of Appropriateness: What is the suitability of PSAI within IWs’ expectations of algorithmic inferences of performance & wellbeing?
Norms of Distribution: When is it reasonable to share PSAI’s inferences of an IW with other stakeholders?
We conducted scenario-based interviews with 28 IWs to highlight their perspectives on using PSAI to algorithmically phenotype their performance and wellbeing. Understandably, these flows involve other stakeholders, but we chose to focus on the workers, as the data subjects' voices are often missing from discourses around PSAI [49]. We found that IWs envisioned powerful uses of PSAI but were aware of privacy intrusions and misappropriations. On the surface, this might appear as another paradox, but the contrasting perspectives of supervision and surveillance can inform each other [109]. This study extends recent literature in Human-Computer Interaction (HCI) and Computer-Supported Cooperative Work (CSCW) that has critiqued algorithmic Human Resource Management (HRM) [4, 108]. We describe the norms for PSAI as guidelines for better information flows and improved regulation.

2 Background and Motivation

A precursor to AI-based inference about workers was rudimentary monitoring. Foucault's pivotal critique of the prison system has inspired critiques of the regulation of life through different social, cultural, political, or — as in our case — technological devices [44]. This Foucauldian lens has been used to argue that monitoring workers fundamentally extends social control and reinforces the existing power hierarchy [19, 76]. However, this perspective is not necessarily a dead end for the innovation of technologies, policies, or reform. In her reflection on Foucault, Lacombe argues that devices for social control can be conceived to "maximize life" in a way that can both constrain and enable [73]. We believe these dichotomous attributes of the Foucauldian lens are represented by Passive Sensing enabled AI. Hence, we study PSAI for information work and aim to reform design, inform implementation, and guide regulation.

2.1 Tracking Workers

Tracking has been historically entrenched in work. Henry Ford used a stopwatch to track the efficiency of workers in his factory [127]. Similarly, at most jobs, a worker is supervised by a manager who ensures they execute their assigned tasks. The words "supervise" and "surveillance" both etymologically translate, loosely, to "oversee". Unsurprisingly, public conversations and scientific literature on worker supervision, monitoring, or tracking have gone hand in hand with discussions of surveillance. Note that different scientific communities prefer the words supervision, monitoring, and tracking over "surveillance", as the latter is the only word that bears "dystopian baggage" [12]. In reality, all these words have both coercive and caring implications, despite their connotations [125]. Therefore, we cannot investigate one facet of the literature without the other. To find a way forward, our study follows Sewell and Barker's stance that "Acknowledging ambiguity and paradox allows a dialogue to develop between the two research communities" [125].
Over time, Ford's stopwatch evolved to remove the human overseer from the workers' sight. We can trace this evolution from thumb scans, through closed-circuit television (CCTV), to different localization technologies [6]. Organizational scholars have critiqued this expansion and intensification of tracking technologies as akin to Bentham's "panopticon" [14], where the many are monitored by the few [88]. Since human oversight is itself a form of labor, augmenting such a panopticon can be favorable for an organization's capital resources [93]. However, it comes at the cost of taking social control away from the worker. For example, UPS saw a net rise in efficiency when they used GPS technology to track drivers, but at the cost of drivers struggling mentally and physically [65]. Today, the monitoring of workers is not limited to recording performance but also extends to wellbeing (e.g., incentive programs based on health trackers) [87]. Again, this form of tracking can still be routed back to the organization's need to maintain its resources [52]. Although such tracking can be politically maleficent (surveillance), Agre described a contrasting stance driven by a philosophical motivation to model reality and create new ontologies to explain workers (capture) [5]. For instance, ubiquitous computing was founded on research in workplace tracking technologies such as Active Badge location systems designed to improve the service flow in a workspace (e.g., routing a phone call to where a worker is) [137, 138]. Since then, Ubiquitous Computing, Human-Computer Interaction, and Computer-Supported Cooperative Work have investigated a variety of applications for tracking workers to provide better services [11, 67, 92, 105, 122, 126]. These innovations have coincided with the quantified self movement, which has enabled individuals to digitally measure many aspects of daily living [69]. Generally, this movement describes consented self-tracking that can be both empowering and detrimental [88]. Outside of work, such technologies are a source of personal informatics and are becoming commonplace in everyday life [81].
At work, these tracking technologies are situated within the relationship shared between the worker and their organization. This relationship is underscored by a power asymmetry. Here, power refers to the "ability of a person to withhold rewards from and apply sanctions to others" [16]. This has also been described as the "bargaining power" to dictate the terms and conditions of an employment contract [104]. The asymmetry refers to one party having a greater ability than the other to determine the employment contract [28]. Simply put, the organization can evaluate a worker and determine how they work in the future, but a worker cannot demand the same from their employer. Power asymmetry inherently exacerbates itself, as those with less power are likely to relinquish what power they have. It is also salient in many job sectors, including information work, and has a cyclical influence on tracking because of information asymmetry [58]. When one party has more information than the other, it can lead to exploitative practices that further the power asymmetry [124]. In theory, advancements in tracking can improve the insights available to both workers and organizations. Naturally, we ponder whether adopting work tracking in practice can actually resist, or even reduce, the asymmetries at work and the anxieties of a panopticon. Recent work has proposed that the quantified self might serve as a "heautopticon", or a form of empowering self-surveillance [39]. However, it is unclear how these possibilities apply to information work. To bridge this gap, our study brings to light the perspectives of workers on adopting these technologies as reflective tools for themselves.
Existing models of understanding, such as Agre's surveillance-capture models, were intended to rationalize the motivations of the social actors in a particular system, such as workers, employers, and the developers of such systems. To clarify what is reasonable for information workers in hybrid work, these models need to be reassessed within the larger social context [5]. The norms associated with tracking workers in manufacturing or logistics roles cannot be transposed to this working population without a context-specific investigation. Our paper aims to clarify those norms for the development of PSAI in information work.

2.2 Algorithmic Phenotyping of Information Work with Passive Sensing

In the past decade, research in Ubiquitous Computing and HCI broadly, and digital health in particular, has coined, used, and critiqued the term "digital phenotyping" [85, 106, 129]. It is the idea of moment-by-moment quantification of the individual-level human phenotype in situ, using data from personal digital devices [59]. We build upon this research to estimate worker effectiveness, which describes a broad set of important outcomes for worker prosperity [131]. Traditionally, for many kinds of work, Human Resource Management (HRM) was merely concerned with productivity, as indicated by the efficiency of output. However, this notion is limited in the context of information work, where "doing the right things" is as important as "doing things right" [41]. Therefore, a more general aspect of this effectiveness is job performance, which Rotundo and Sackett describe as controllable behaviors that contribute to the organization [116]. The other aspect of effectiveness that is gaining popularity in phenotyping is wellbeing, which informs a sustainable and satisfying work experience [25]. We have witnessed an emergence of technologies that use passive sensing for phenotyping human behavior. This approach has several advantages over traditional methods of clarifying human behavior, because behavior can now be studied in naturalistic settings [26].
Arguably, HRM has always involved some kind of passive measurement. In the early 20th century, Taylor coined the idea of Scientific Management to improve worker efficiency, based on the idea that "the prosperity for the employer cannot exist through a long term of years unless it is accompanied by prosperity for the employee and vice versa" [132]. Taylor's Scientific Management went on to inspire several psychological assessments that phenotype worker effectiveness to improve HRM [121]. In turn, these assessments triggered the development of many of the digital monitoring systems we see today. Note that many digital monitoring systems have been set up for the protection or security of organizational assets [123]. Certain digital monitoring is also normalized within work, such as monitoring emails [77]. While these approaches need constant critique and redesign to be acceptable, our paper is focused on the use of PSAI for HRM. In practice, this kind of application of AI in HRM is typically justified by the economic theory of mutual obligation [61]. According to this theory, workers need to meet certain goals based on their employment contracts, and employers need to ensure they satisfy those goals. As a result, today we see organizations engaged in information work using PSAI to learn about the worker. Some literature even refers to this style of HRM as Nudge Management [42], but in popular media, these technologies are often referred to as people analytics [84]. Regardless of terminology, this kind of algorithmic inference for HRM cannot be treated as a single monolith that would draw the same reception irrespective of how it is implemented. We aim to describe the socio-technical aspects of information work that support and resist the adoption of PSAI.
The shift to hybrid work has made many organizations adopt different forms of passive monitoring [7]. However, not all of these approaches involve algorithmic inference. For instance, Time Doctor provides accurate measures of billable time by constantly streaming a video of an IW's screen and webcam [40]. PSAI systems leverage similar streams of data, but they not only record events but also model IW behaviors to infer their effectiveness using AI and machine learning. Commercial technologies like Viva Insights [56] and Humanyze [53] are examples of PSAI that provide insights on worker wellbeing and performance by modeling abstracted data from work applications. Academic research in this space offers many more examples of PSAI technologies for work. Before we elaborate on these, it is important to acknowledge that many of these studies did not leverage PSAI for prediction or inference but for explanation of underlying social phenomena [34, 90]. Other studies have used passively sensed data to support work in the moment, such as by informing an IW when they should take a break [67]. Having said that, these studies can still inspire predictive systems for personal tracking or HRM to, arguably, improve worker effectiveness [89]. Therefore, we reflect on all kinds of scholarly literature on algorithmic inference for the daily activities of IWs, and we refer to these approaches as "algorithmic phenotyping" due to their emphasis on inference beyond simply gathering digital data.
One way to scope PSAI would be to limit it to the work context — for example, modeling email activity to estimate an IW's effectiveness [92]. Similarly, AI could model conversation metrics in virtual meetings to provide insight into the quality of meetings [142]. Even devices embedded in a work environment could be harnessed by PSAI to infer worker experiences, such as through acoustic sensing [60] and proximity sensing [32]. Prior work shows opportunities for PSAI to harness devices like WiFi routers [31, 37, 50] and smartcard readers [46] to understand behaviors related to wellbeing. Similarly, workplace social media is a digital infrastructure that has been leveraged [126]. In contrast to work-specific scoping, the Social-Ecological Model [20] motivates research on PSAI that expands beyond the workplace, as the outcomes of work can result from many different factors. For instance, wearables have been used to model the physical fitness and sleep hygiene of workers for inferences [43, 112]. PSAI has leveraged personal devices, such as wearables, to infer an IW's cognitive load [122]. Research has also shown the value of modeling behaviors such as commuting, which are related to but physically distinct from the work context [98]. Furthermore, broader multimodal sensor deployments have shown promise in classifying worker performance [96, 117]. PSAI could also use social media as a source to understand an IW's wellbeing [35, 86, 118]. Looking back at these technologies amid the pandemic, Das Swain et al. envisioned possible future implementations of PSAI with cautionary implications, even when these are designed for the worker [33]. What this body of work does not clarify is what IWs themselves envision. Through our findings, we amplify their perspective. In turn, we provide further direction for the development of PSAI as personal informatics solutions and re-contextualize them as both enablers and impediments for today's IW.
Figure 1: Distribution of participant attitudes. Higher values represent more acceptance. (a) Public surveillance: opinion on the expansion of surveillance to reduce crime and offences, on a 3-item scale. (b) Personal tracking: experience with wearables, location tracking, social networks, etc., on a 5-item scale.

3 Methods

In this study, we take a worker-centered approach to inquiring into PSAI. In an institutional setting of information work, a worker is only one of many different stakeholders. Studies on Human-Data Interaction describe data as common objects for all stakeholders to interact with [27, 128]. However, especially in this case, the data is not created by all stakeholders equally, nor are its implications uniform. When technology is designed without benefits for the data subject, we risk worsening the power asymmetry [88]. One method to tackle this growing asymmetry is to design for the data subject as a primary beneficiary of a system that leverages their data [62]. Thus, we focus on the IW's perspective and investigate how they envision adopting PSAI in the future, if at all.

3.1 Participants & Recruitment

We recruited 28 IWs working in the U.S. and interviewed them between April and May 2022. We used both online and digital advertisements to recruit participants. To scope our study to information work, we screened to ensure interested individuals had "work experience that involved cognitively demanding tasks to meet information-oriented goals, e.g., programming, marketing, engineering, accounting, management, etc.". Participants were required to have at least 2 years of work experience. We also required participants to have some experience working on-site so that they could consider PSAI in light of both the traditional and emerging work contexts. Our participants represented a variety of roles, including engineers, developers, analysts, and accountants. Participants predominantly described their occupational sector as Information Technology (IT), but our sample also reflected views from areas such as finance, consulting, manufacturing, healthcare, and libraries. Twelve participants identified as female, 15 as male, and 1 preferred not to say. Seventeen participants were younger than 30 years old at the time of the interviews. Participants completed a survey to report their attitudes toward public surveillance and personal tracking (adapted from [1]). Figure 1 shows that our participants leaned toward expanding public tracking (to reduce crime) and had a diverse set of experiences with technologies that track them in their personal lives. Each participant was compensated with a gift card worth $20 at the end of the interview. Table 1 provides a lookup summary of each participant along with their study identifier. Note that we did not explicitly analyze participants by the categories in Table 1. Inspired by similar studies [115], we include these for epistemological accountability and to express the scope of our study.
Table 1:
ID | Gender | Age | Race | Role | Sector
P1 | Male | 21-29 | Asian | Research Engineer | IT
P2 | Male | 30-39 | Asian | Developer | Finance/IT
P3 | Male | 21-29 | Asian | Analyst | Finance/IT
P4 | Male | 30-39 | Asian | Data Engineer | IT
P5 | Female | 21-29 | White | Product Manager | Insurance/IT
P6 | Male | 21-29 | Asian | Data Analyst | Insurance
P7 | Female | 21-29 | Asian | Consultant | Consulting
P8 | Male | 30-39 | Asian | UX Developer | IT
P9 | Male | 21-29 | Asian | Research Assistant | Manufacturing
P10 | Male | 21-29 | Asian | Accountant | Venture Capital
P11 | Female | 21-29 | White | Scientist | Government
P12 | Female | 30-39 | White | Developer | IT
P13 | Male | 21-29 | Asian | Account Management | Retail
P14 | Female | 21-29 | White | Technical Service | Library
P15 | Female | 21-29 | Black or African American | Project Manager | Research
P16 | - | 30-39 | - | Team Manager | Healthcare
P17 | Male | 21-29 | Asian | Product Manager | -
P18 | Female | 21-29 | White | Product Manager | Consulting
P19 | Female | 21-29 | Black or African American | Recruiter | Education
P20 | Female | 30-39 | White | Researcher | Health
P21 | Male | 21-29 | Black or African American | Engineer | IT
P22 | Male | 30-39 | White | Software Developer | IT
P23 | Male | 21-29 | White | Financial Planner | Consumer Goods
P24 | Female | 30-39 | White | Customer Service | IT
P25 | Male | 21-29 | White | Business Analyst | IT
P26 | Female | 30-39 | White | Consultant | Marketing
P27 | Female | 40-49 | White | Director | IT/Sales
P28 | Male | 30-39 | White | Portfolio Manager | Finance
Table 1: Participant summary by gender, age, and race, as well as role and occupational sector. Fields marked '-' indicate characteristics the participant chose not to report.
Table 2:
Label | Description | Adapted From | Reference
Sys 1 | Uses CCTV cameras to observe different activities in a workspace. Analyzes physical activities to measure your performance. HR will receive a report of your performance. | CCTV | [23]
Sys 2 | Records the webcam feed of your PC. Analyzes your presence, expressions, and surroundings to measure your performance. Your manager will receive a report of your performance. | RemoteDesk | [111]
Sys 3 | Captures screenshots of your PC activity at regular intervals. Analyzes PC activity to measure your performance. Your manager will receive a report of your performance. | Interguard | [57]
Sys 4 | Uses custom sensor hardware to measure occupancy in different spaces at work. Analyzes physical space use to measure performance. HR will receive an aggregated report of workforce performance in different spaces. | FM Systems and Freespace | [45, 130]
Sys 5 | Logs data from organizational communication (e.g., email, Slack, or calendar) and infrastructural systems (e.g., WiFi, Bluetooth, or access cards). Analyzes digital and physical activities to measure the organization's performance and wellbeing. HR will receive an aggregated report of workforce performance and wellbeing. | Humanyze | [53]
Sys 6 | Logs the time you spend on PC applications and websites. Analyzes digital activity to measure your performance. Your manager will receive a report of your performance. | ActivTrak | [2]
Sys 7 | Logs the time you spend on work applications (editing, communicating, and scheduling). Analyzes work-related PC activities to measure your performance and wellbeing. You will receive a report of your performance and wellbeing. | Viva Insights and My Analytics | [10, 56]
Table 2: We designed PSAI scenarios based on contemporary technology. We refer to these in our findings via the labels here.

3.2 Interview Protocol

Recruited participants consented to participate in one-on-one semi–structured interviews. All interviews were conducted by the first author and included one other author as an observer. Interviews started with open-ended questions to understand the approaches participants’ organizations were using to evaluate their performance and wellbeing. Then, we provided a definition of PSAI rooted in personal tracking and an overview of its potential in the work context [69]. This was followed by a scenario–based comparison exercise to elicit rich perspectives on PSAI for workers.
We acknowledge that situating potential data subjects in actual behavioral contexts can help anticipate real behaviors. However, implementing a multitude of PSAI systems and conducting field studies would be impractical. By contrast, leveraging scenarios that describe emergent use-cases can anticipate actual behaviors in new socio-technical settings [140]. This technique has been used in passive sensing research to rapidly evaluate new application designs [38, 54] and understand privacy perspectives toward such technologies [82, 83, 100]. The approach has also made its way into studies on Human-AI interaction [78, 139]. Park et al. used this method to understand perspectives on general applications of algorithmic HRM [108]. Given that our aim is to highlight norms, scenarios can be a powerful approach, as "presenting users with scenarios that push social boundaries helps to uncover where these boundaries actually lie" [38]. As shown in Table 2, the PSAI scenarios we presented in our study were adapted from real systems for HRM. Each scenario outlined the information flow of the PSAI system: (i) how data is sensed, (ii) what inferences AI produces from the data, and (iii) how the inferences can be distributed. To improve elicitation, we showed participants two randomly selected pairs of scenarios. This approach was inspired by psychology literature showing that comparisons help rationalize the underlying features of an artifact through associations and contrasts [134]. The comparison of scenarios was not aimed at rating PSAI for HRM but only at initiating reflection. We also showed a third pair, a combination of already-shown scenarios, for additional rigor and clarity (a sketch of this pairing appears below). For every pair, we asked participants which scenarios they would resist consenting to and which scenarios they would find useful. Note that the scenarios were only starting points, and participants were free to reimagine PSAI as they described their preferences. For example, in certain sessions, participants liked some aspects of a system but had problems with others. They had the flexibility to rethink the scenarios, and interviews continued with new emergent scenarios, with our original scenarios serving only as a reference. As such, the aim of the scenarios was not to show participants an exhaustive set of systems but rather to provide a probe to help them appreciate the range of possibilities.
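As a rough illustration of this elicitation procedure, the sketch below draws the scenario pairs for one session from the seven systems in Table 2. The sampling details (whether the first two pairs are disjoint, and how the third pair recombines already-shown scenarios) are our assumptions for illustration, not a specification from the study protocol.

```python
import random

SCENARIOS = ["Sys 1", "Sys 2", "Sys 3", "Sys 4", "Sys 5", "Sys 6", "Sys 7"]

def draw_session_pairs(rng: random.Random):
    """Draw the three scenario pairs shown in one interview session:
    two random pairs, then a third pair recombining already-shown scenarios.
    Assumes the first two pairs are disjoint (an illustrative choice)."""
    first_four = rng.sample(SCENARIOS, 4)
    pair_one, pair_two = tuple(first_four[:2]), tuple(first_four[2:])
    # Third pair: one scenario from each earlier pair, for added rigor and clarity.
    pair_three = (rng.choice(pair_one), rng.choice(pair_two))
    return [pair_one, pair_two, pair_three]

print(draw_session_pairs(random.Random(7)))
```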
Interviews were conducted over Zoom, and each session was recorded for transcription. We removed any mention of the participants' employers or identifying details of coworkers from the transcripts. Participants were informed that turning on their camera was optional. Interviews lasted between 40 minutes and 1 hour. Our study was approved by the authors' Institutional Review Board (IRB).

3.3 Data Analysis

We compiled all the transcripts and performed thematic analysis to synthesize patterns from the participants’ perspectives [17]. Every transcript was carefully read and open–coded by the first author and at least one other author. Throughout this process, we iterated the codes by meeting regularly to reconcile existing codes and identify new ones. After the codebook was completed, we performed affinity mapping to interpret and organize the initial codes into higher–level themes. This resulted in a three-level thematic structure. At the highest abstraction, our themes summarized IW perceptions of PSAI in terms of its effectiveness, concerns, applications for personal utility, and applications for shared utility. Given our aim to describe the norms of passive sensing, we reoriented and refined our themes as per the Contextual Integrity framework [102].

3.3.1 Contextual Integrity of Sensing at the Workplace.

One of the classical approaches to evaluating privacy for passive sensing is to judge it by its proportionality to existing activities [75]. In information work, project management tools such as JIRA are already used to disclose an IW's work activity to others on the project [107]. An IW might want to disclose their state of wellbeing to their manager to negotiate work breaks. In theory, such proportionality can be a compelling justification for PSAI at work. Yet, it remains an open question whether the algorithmic phenotyping of PSAI introduces uncertain imaginaries that cannot be reconciled with existing work practices. According to Nissenbaum's framework of Contextual Integrity, evaluations of tracking systems are limited when privacy is considered intrinsic to the actors, spaces, or nature of information [102]. Instead, the adoption of systems must be studied by understanding the role of that information within the context of the user.
The Contextual Integrity framework is becoming increasingly significant for evaluating reasonable implementations of sensing technologies. Nicholas et al. have illuminated attitudes toward personal sensing in the health context [101]. Similarly, contextual integrity has been used to explain the adoption of tracking systems for public health [135]. Closer to our scope, a recent study by Adler et al. described the norms of information flow for quantifying the stress of physicians in the workplace to respond to burnout [4]. Interestingly, in their context, workers felt that sharing information with a supervisor could be more valuable than self-reflection, as supervisors had actual power to make changes to assuage their stress. Studies indicate that workers are willing to adopt ambient technologies if they enhance their wellbeing [114] and location tracking when it improves work efficiency [1]. Can we transfer these expectations to the algorithmic inferences provided by PSAI? Information work provides a unique setting (Section 2.2). This motivates us to interpret our themes through an analytical lens that reconciles the expectations of emerging technologies in specific settings. Contextual Integrity is upheld when the following informational norms are maintained: (i) Norms of Appropriateness and (ii) Norms of Flow/Distribution [102]. As a result, we synthesize and scrutinize our findings based on these norms.

3.3.2 Reflexive Considerations.

We describe our positionality as a way to situate the values that shaped this research. Three authors have conducted research in the past combining machine learning with passively collected data for digital phenotyping to support mental wellbeing. The authors have no stake (financial, personal, professional, or otherwise) in any of the technologies used to inspire the scenarios (Table 2). However, their research advances technologies like PSAI through novel methodologies as well as human-centered evaluations. Two authors also have experience working in typical information work organizations. In light of this, we consider ourselves "insiders" because this perspective critiques technology motivated by our own research and appraises our own sociotechnical reality as IWs. Our identities and experiences as researchers also help us construct meaning from our data and conceptualize our findings [15]. Broadly, this paper is influenced by our interactions with privacy researchers, organizational psychology researchers, IWs, and other data subjects of digital phenotyping. We borrow Chancellor et al.'s term to describe ourselves as "critical insiders" [22]. We are in a unique position to bridge disparate views and approaches on the future of work by pursuing a worker-centered approach.

4 Findings

Our inquiry into IW perspectives illuminated considerations regarding the value of, and concerns about, PSAI at work. These attitudes were underpinned by existing work dynamics and expectations, which made the adoption of PSAI systems at work distinct from those in personal life. When participants reflected on their use of personal tracking technologies (for fitness, sleep, and screen use), they were motivated by "benchmarking" (P28), "hitting goals" (P6), and "tracking progress" (P24, P28). Overall, these motivations aligned with visions of PSAI at work providing insights for self-efficacy and care. A key concern with using PSAI for personal tracking was data being used for advertising, but this was perceived as a necessary transaction ("I try to function in reality" - P28). However, information work presents a unique context for using PSAI, with its own distinct considerations. P14 articulated the overarching tensions that complicate the adoption of PSAI at work: "On the personalized Fitbit, I am paying them to give me the insights. My request for that information outweighs my sensitivity for it [versus] personalized insights on technology driven by a company that is paying me to do work." Therefore, adoption of PSAI can be disincentivized by anticipated information flows and existing power structures. Through this section of the paper, we elaborate on how IWs imagine the role PSAI can play in their work amid these power dynamics.

4.1 Norms of Appropriateness: The suitability of PSAI within IWs’ expectations of algorithmic inferences of performance & wellbeing

According to Sappington, the gap between actual worker behaviors and the organizational perspective of workers describes an inevitably incomplete social contract that gives workers discretion but also limits organizational feedback [120]. This incompleteness can largely explain the motivation for PSAI at work [34, 94, 118]. Our participants echoed the opportunities for PSAI-like interventions for their benefit, but they were also wary of the implementation of data collection and the implications of inferences. Existing assessments of performance ignored the IWs' process, focusing just on outcome-based "statistics" (P14) that provided a "limited data view" (P14). Instead, our participants had a more nuanced perception of their performance that could be reflected in work phenomena — such as break-taking (P11), task-switching (P3), and availability demands (P19) — and non-work phenomena — such as their expressions (P24), sleep (P3, P7, P9), and physical activities (P13). Although wellbeing evaluations were not common, organizations did provide resources (e.g., seminars or subscriptions). The main complaint against these was the lack of individualized actionable information, which made IWs feel their mental wellbeing was neither actually valued nor important to the organization (P24). P3 described the missing link as "actual rubber to the road metrics, reaction and solution." Understandably, PSAI has promising potential given that it is automatic, continuous, and unobtrusive. However, efficacy in developing personal mindfulness does not sufficiently explain appropriateness in the information work context. This section describes how IW perceptions of appropriate PSAI were embedded in their attitudes toward information work.

4.1.1 Effect on Job Consequence.

PSAI systems provide indicators, which might be considered orthogonal to work-specific tasks (e.g., your performance was moderate or stress was high). On the one hand, participants found value in leveraging these insights to contextualize their experience with evidence and champion change. On the other hand, participants were anxious that these insights could be misappropriated to their own detriment.
“You’ve had these goals, you’ve had hit these hurdles and setbacks. If you put that report in the context of this performance evaluation, I think together they’re going to really have a significant impact on your own professional and personal development.” - P6
The insights generated by PSAI can be empowering to IWs, as they help contextualize their work experience. P25 imagined such systems supporting IW needs: "I think with the data, it would at least help you sit down at the table, so to speak." Traditionally, workplace evaluations favor ends over means. As P2 puts it, "the work which is getting done is what is counted, but how we achieve it is never logged in anywhere." P5, like P6 (quoted above), believed that PSAI insights could complement existing evaluations after seeing Sys 5. Reflecting on one of her past evaluations, P15 claimed that PSAI could have been useful as a reference ("let me give some numbers"). It can give IWs a deeper understanding of their work patterns, present opportunities for learning effective work practices, and enable them to negotiate changes. P22 envisioned using Sys 7 to request time off for their wellbeing: "He can look into this report and it would be some kind of objective." Alternatively, P17 believed this data would be more persuasive for reorganizing his work expectations: "Having data that would support, I need a virtual assistant or we need to hire another PM or it's not feasible for me to run this many projects and run this team at the same time." P14 thought PSAI could support her in highlighting her role to others higher up in the organization. She said, "This would end up benefiting us more because it would help others see how much we actually do and change the current stigma." Even at an aggregate level, it can help employers reflect on their organizational health. For instance, P25 found this a suitable approach for "the company to be aware of work-life balance." Therefore, IWs find value in such systems when they can incorporate their insights into demonstrable change, such as professional development or negotiation for wellbeing.
“Realistically, there is that concern that they’re going to look at this big promotion and they’re going to say, ’I don’t know if he’s going to cut it’ ” - P3
The existing power asymmetry of information work environments engenders concerns about privacy and the subsequent misappropriation of passively sensed data. Antithetical to the empowering aspects of PSAI, P3 was concerned that his employer could tap into these insights to stifle his career, even going on to call one PSAI system "destructive." P1 had a more straightforward concern: "If my workday performance and how I work was released, it might affect how much I get paid." Participants like P4 were anxious about the uncertain consequences of other stakeholders using this data. These concerns stemmed from the perceived lack of control over one's data in the organizational context: "I download the app, the information is captured and then it goes to someone else. That's the objection." (P27). Furthermore, implementing PSAI like Sys 1 and Sys 4 — which are embedded in the physical infrastructure — can create an austere situation where IWs might feel that their choice to consent could affect their employment (P5, P11). In fact, some participants felt that the very decision to use such systems for deeper surveillance could reflect an organization's own values (P12). Eventually, such uncontrolled and imbalanced deployment of PSAI can deter IWs from choosing to work at such companies. However, even that choice is a function of job precarity in the sector. As a result, the development of PSAI systems needs to be aware of the socio-economic conditions of employment.

4.1.2 Respecting Work–Life Boundary Management.

Workers' preferences for work-life boundary management reflected their perceived control over privacy in PSAI systems but also highlighted the value they expected from the system. Since the COVID-19 pandemic, emerging work practices allow many IWs to work remotely, either entirely or on certain days of the week. Moreover, given the ubiquity of personal laptops and mobile phones, it is commonplace to bring some work home. Although we know non-work factors can influence work experiences, some workers found sensing beyond work invasive and irrelevant to improving work. Yet, some workers also believed that sensing non-work contexts could be less consequential to their jobs and more holistic for reflection.
“If I’m going to the office, I will probably agree to do that. But if I work from home [...] I don’t want that to record anything in my home that’s maybe not work-related. ” - P8
Different workers take different approaches to their work-life boundary. Some demarcate the two using physical aspects. A common understanding is segmenting work-life based on the space the IW finds themselves in. In the quote above, P8 was willing to consent to PSAI if it was contained to their workspace. With more interleaved work-life practices, space is not the only indicator of work-life separation. Organizations often provide workers with work-specific devices or enforce a logical separation between work and personal profiles. For example, P5 thought Sys 5, which logs applications and browsing, was reasonable because she did not do personal activities on her work machine anyway. Although this might be to ensure the security of organizational data, it also provides another method for IWs to segment work-life. P17 noted that he was willing to allow PSAI systems on his work device, "But if it's a personal device and I'm doing work on, absolutely not." P1 said Sys 2 was a "violation of personal space" because the webcam could capture their home environment. Understanding these constraints can help describe the limits within which privacy can be preserved. It is also worth noting that some participants believed that preserving the work-life boundary made PSAI more useful (P13, P16, P21). Similarly, on different occasions, both P9 and P10 stated that a focus on the work context was more "accurate." "If we can achieve only tracking the work applications that will definitely improve the efficiency and avoid a lot of other privacy arguments, if there's any there," said P8. Thus, for certain IWs, the work context is not only more private but can actually be more useful.
“Maybe on a Fitbit watch or something wearable rather than my computer itself, because I don’t like people seeing what I’m doing on this computer” - P7
P7 presented an alternative viewpoint, showing that restricting PSAI to work can elicit concerns about job consequences (Section 4.1.1). In fact, depending on what kind of data is being sensed, an IW might consider the privacy of their work activities to outweigh that of activities outside of it. Devices distinct from the work context can be considered more reasonable for sensing. P1 even described a greater willingness to accept a PSAI system provided by a third party because of the apprehension that something provided by the organization could be misappropriated by HR. Again, the shifting of sensing to non-work devices is not shaped by privacy decisions alone; P18 found work-specific PSAI to be limited in "the world of working from home." P6 felt that PSAI like Sys 4 could be more valuable. He said, "It would give me a true reflection of how I work, it would give me a true performance evaluation report that I can actually make use of." Thus, PSAI systems that model phenomena outside work could provide the opportunity for an IW to interrelate all aspects of their life and improve as a totality.

4.1.3 Preservation of Flexibility.

Choosing where to work is not the only freedom IWs have in determining their work styles. IWs often enjoy a broader sense of flexibility, where they are rewarded and evaluated on outcomes. Unlike other forms of labor, an IW is not as heavily scrutinized through time-tracking. "Brain work" is often hard to quantify, and therefore workers can approach work tasks at their own rhythm. We found that our participants suspected this flexibility could be hindered by PSAI systems, even if their employment was unaffected and their work-life boundary was secure.
“I’m very flexible in how I work. I like to get things done on my own time. Sometimes that means I carry over on the weekend. And sometimes that means I just do work nine to five. I’d rather just keep that on my own and how we get things done rather than having some kind of tracking.” - P1
Among others, P1 felt that PSAI inferences might be reductive in quantifying varying work styles. In reference to IWs who work in bursts or "sprints," P9 said, "it can adversely affect people who do not progress in a linear manner." Similarly, P20 anticipated a simple case of regimenting, where PSAI's inferences would force her to work specific hours instead of simply being judged on her output. For IWs like her who work from home, these systems could disrupt how they choose to interleave their work-home responsibilities. P18 noted that Sys 1 could penalize behaviors that do not look like work but actually are, such as when she would "do laps around the office" or "lay on a beanbag chair." P7 believed this to be the case when PSAI was limited to "just PC stuff," such as Sys 3 and Sys 6. More generally, P28 believed that PSAI confined to work applications reinforces an "older view" of work that holds "you've got to be in a place to be able to do a job." With more IWs opting into remote or hybrid work, these systems can be considered regressive. However, expanding sensing might not be the solution either. IWs like P1 found that tracking ecological factors like movement and space might not generalize to "varying situations" and could render false negatives. Meanwhile, P6 preferred work-specific tracking over ecological tracking because "I could skew the data in favor of my performance being better than it actually is" (referring to screenshots taken by Sys 3); as a result, it would free him to work as he likes. These findings indicate that the very presence of PSAI could establish expectations of a rigid work style and discourage pluralistic approaches to work.
"It could become dehumanizing. It could become a little bit robotic, [where] in a way I can only perform at seven percent today." - P19
The threat to flexibility posed by PSAI can not only restrict activities but also cause worker distress. In the quote above, P19 alluded to feeling further commoditized because algorithmic inferences tend to convert nuanced, complex human experiences into streams of numbers. P3 described enrolling in such a system as "a little intimidating" and feeling like part of a "cold, hard, big institution." These impressions were likely due to perceptions of PSAI as a tool that reduces the complexity of a worker's experience into performance metrics. P10 felt that "continuously monitoring for what you do [...] could affect your job more," and P20 even thought it could be a "distraction and counterproductive." Similarly, P5 described, "I don't like being overanalyzed […] I would be less likely to produce good work." Other participants, like P6 and P11, also alluded to the fact that PSAI systems can exacerbate the Hawthorne effect caused by supervision [3]. It could even lead to negative consequences for an IW's affect. P24 was concerned that the continuous monitoring required by PSAI could "stress [her] more," and P10 felt it would "put a lot of pressure on [him]." These perspectives mostly arose from discussions of performance measures rather than wellbeing inferences. However, to keep up with a camera-based system like Sys 1, P8 felt they would need to compromise their wellbeing by reducing breaks and socializing at work. Even when a worker might not lose sensitive data, the mere presence of these systems can impact their work effectiveness. This risk is ironic for systems that aim to improve worker wellbeing and performance.

4.2 Norms of Distribution: Reasonableness of sharing inferences from PSAI with other stakeholders

Information work is inherently collaborative in nature. Collective knowledge supports IWs in their day-to-day work and during important career junctures, such as evaluations and promotions. Our participants explained that in their current settings, coworkers were "disconnected" from others' challenges (P21), and the lack of awareness of each other's state led to disruptions in work (P26). Instead, IWs needed to personally check on each other's wellbeing (P5, P14) and felt that hybrid work was diminishing their ability to maintain this practice (P25). Some PSAI systems could potentially smooth out organizational workflows by pooling behavioral patterns [32, 141]. Arguably, such existing practices present possibilities for IWs to share estimates from PSAI within their work network. In this section, we describe the different paradigms that motivate an IW to share and the conditions within which they think sharing should transpire to protect their interests.

4.2.1 Paradigms for Sharing.

Sharing knowledge in an IW’s workplace is essential for seamless communication and information flows. PSAI systems might develop insights on a worker passively, but how and where that information is distributed needs to be a deliberate process. Here, we describe the network of stakeholders within which a particular IW might want to share the insights provided by PSAI.
“Sometimes people don’t know how to manage the stress they have at work. I’ve seen that with a couple of people. They don’t know how to escalate that or to communicate that up. You know, and sometimes supervisor doesn’t know because they’ve never been able to see it.” – P25
A common form of distribution described by our participants was one-to-one sharing for personal improvement. Others echoed P25 in using PSAI to better explain their work context to their managers (e.g., Sys 2, Sys 3, and Sys 6). This perspective was often described as analogous to existing workplace practices, i.e., "I work with them" (P15), "they know more of your day-to-day" (P7), and "understand the way that thought processes work or deep thinking happens" (P18). What appears to be important in this sharing flow is that the manager should be viewed as a stakeholder with real expertise or a valid opinion on an IW's work behavior. Participants suggested alternative experts as well, such as senior collaborators (P7) or advisors and mentors (P6). Sharing PSAI inferences could also be seen as necessary because viewing them in isolation could cause harm. About Sys 7, P20 pointed out that she "would have to use it with some sort of guide or some sort of other coach in order to use it in a kind way rather than in a punitive way." On inspecting P20's sentiment further, it was clear that her preference was not for validating estimates but rather for "someone to tell me that what is happening is normal in this moment or common." Note, however, that these one-to-one flows still reflect the power asymmetry of information work, as in most of these cases an IW is required to share information with those above them in the hierarchy. Refreshingly, we also found some perspectives that go against these expectations: both P16 and P28 were willing to share their PSAI insights with those they mentor. Thus, a key component of this paradigm is the presence of stakeholders who have the know-how to reappraise the inferences produced by PSAI and, in turn, generate more holistic feedback for IWs.
“I know if I hear that our team is doing well, but I know I’m not up to standard and what I should be meeting in terms of the team... that actually pushes me a little bit more” - P13
Another sharing metaphor that emerged from our findings was sharing information for comparison and coordination in a many-to-many fashion. The commonly stated purpose was to share PSAI insights to regulate one's activities in accordance with the coworkers one aspires to resemble. Akin to P13, P22 felt that seeing others could help them aspire to better work processes. Of course, these many-to-many transactions need to be mutual, as P28 said, "I'd be willing to share my price in order to get that back." P10 also wanted to be compared but clarified that it did not have to "be a specific number," suggesting abstract methods of comparison. P7 wanted to view aggregated insights for "different types of roles like people that are managers or [...] how much are the analysts doing." Thus, this kind of aggregated benchmarking against peers can help an IW identify "normalized" patterns and improve social awareness within organizations. This could be cathartic, but it could also nudge an IW to understand that certain challenges might be a function of a poor workplace and are thus less "mutable." Aside from self-regulation, some participants believed this form of sharing could help redistribute workload. P7 envisioned sharing PSAI estimates of her stress with coworkers so that "they can at least take over some of the easier work." To complement this, P25 wanted to know how the people he was working with were feeling so he could support them during junctures of low wellbeing ("it beats them up"). Similarly, P18 thought this kind of sharing could help manage work in her team when one of her coworkers was "having a hard time personally." An extension of this many-to-many paradigm is to share data completely anonymously for gross aggregation (e.g., Sys 4 and Sys 5). While P4 claimed the lack of specificity in such a paradigm would make their privacy more protected, P20 and P26 believed it could actually serve an altruistic purpose. She said, "I would certainly consent to that if my individual data were to be consolidated with others because I think that there would be a purpose to that." Although this might not directly benefit the IW who is the data subject, it could lead to eventual collective benefit, such as cultural change in the concerned organization. Therefore, sharing within a finite network of coworkers can help an IW make more sense of the inferences they receive and leverage the support of coworkers.

4.2.2 Conditions for Sharing.

In information work, some information about an IW is constantly available for all coworkers to see. It can be momentary information, such as availability, or work-specific information, such as which documents they worked on or when they last committed their code. With this kind of information, an IW's state can be visible to others unconditionally. However, with PSAI insights, our participants preferred more intentional sharing. In particular, IWs referred to their ability to negotiate, their perception of stakeholder roles, and the accountability of the system. This section expands on the factors that can inform flows of sharing.
“It feels a little vulnerable to just then send the metrics off to somebody without having a chance to add my own interpretation, just leaving it up to their interpretation.” - P15
In the quote above, P15 wanted to assimilate and communicate her understanding of the algorithmic insights before passing them over to another stakeholder. Similarly, P11 said, “I feel like I have got some time to adjust and then sharing would make me feel more comfortable.” P7 remarked that she would share the insights if “it needs to be escalated,” a common way of describing work issues in information work that need more attention. “I would feel better, I would feel more in control,” said P14 when reflecting on the possibility of personally appraising the PSAI insights first. Without this agency, an IW might feel over–scrutinized. P20 even exclaimed that this kind of sharing can be seen as “manipulative,” whereas social discretion allows her to fulfill any necessary disclosures to her supervisor. Essentially, IWs need some room to negotiate the algorithmic inferences of PSAI before they can be distributed any further. Thus, the control that IWs seek is not limited to whom the data is shared with, but extends to how and when it is shared.
“This is something I can learn, adapt and improve myself and also talk to my manager so we can work together to get better. But if HR sees something then I am not sure how he or she will respond.” - P8
The ability to negotiate the PSAI insights does not imply that newly generated information can be shared with anyone. Participants had varying attitudes towards different stakeholders in the information flow because of anxieties that the insights could be used against them. Several participants shared P8’s view that their manager is preferred over Human Resources (HR) as a receiver in the information flow (P7, P15, P19, P22). Ironically, we found strong resistance to HR being involved in systems designed for HRM. The responses to these situations were plain and clear: “I don’t want HR to be measuring anything” (P17). HR as an entity seemed to foster a negative connotation, described as a “bad word” (P17), “scary” (P23), or “ominous” (P15). Both P13 and P24 anticipated they would be worried about what HR could interpret from PSAI or question them for. Aside from the social connotation, P22 thought HR was too “far removed” from the work context, while P23 thought this was not part of HR’s role. Alternatively, not only was a manager more relevant to an IW’s functioning, but certain participants also noted that it was actually their manager’s primary function to improve their work experience. P13 even went on to say that they would rather have PSAI directly send insights to managers: “I don’t really care about how well I’m doing in terms of performance, I feel like that’s more of a important measure for my manager.” P4 and P10 described that managers could use these measures to coordinate work. By contrast, other participants expressed greater concern about managers potentially micromanaging. “I also like the level of separation from my manager,” said P5, who would prefer the insights be shared with HR. Similarly, P9 mentioned that the trust deficit between him and his manager made it challenging to foresee favorable outcomes of sharing PSAI insights. Interestingly, P4 thought that HR should be the ones to educate managers on good practices for using PSAI. Therefore, the perception of the functional roles of different stakeholders can determine an IW’s willingness to distribute their PSAI insights.
“Even though System X appears to capture more of what I’m actually doing. I’m personally willing to fork over that if I understand the details of it.” - P28
Against the backdrop of the above anxieties, IWs like P28 expressed a need to understand how the entire PSAI information flow was set up. An improved understanding can help an IW anticipate the consequences of these insights being misappropriated. P5 called for greater disclosure and transparency: “Some way to describe the limitations of the system would make me more comfortable with a system.” In fact, some participants noted their indifference to PSAI that they considered ineffective, and therefore of little value in the information flow (e.g., both P1 and P7 thought that space usage was a poor measure). Besides the mechanics of PSAI’s collection and inference, P9 and P20 called for explicit disclosure of the stakeholders who could access these insights. P1 was skeptical about who manufactures the PSAI: “If they were using a product promoted by Apple to do this, I’m going to be more OK because I know this data is not going to go back to HR.” P11 urged that the flow of information be established within organizational policy: “I think there has to be an agreement of purpose or expectations.” For instance, participants with smaller teams were concerned that many–many sharing could lead to negative consequences for their job role or employment (P1, P2, P21, P22). Participants also valued awareness of who else was sharing or how many people were being aggregated. As P16 noted, “given that the culture was such that everyone was having an openness to the material and they felt comfortable with it and it made sense to everybody.” Therefore, IWs tend to expect clear notice and guidance on the scope of distribution in the PSAI system.

5 Discussion

By inquiring about the perspectives of IWs themselves, our study brings attention to the norms within which PSAI needs to be implemented to simultaneously promote IWs’ own performance and wellbeing, while also protecting them from misappropriation of algorithmic inferences. Many PSAI systems available commercially are top–down and provide little insight to IWs themselves [2, 45, 53, 111]. However, reimagining these systems as tools for personal reflection and quantification was not sufficient to assuage all the concerns. While IWs felt that personalized insights from PSAI could help them thrive at work, they also believed that specific implementations not only intruded on their privacy but risked negatively impacting their work experience (Section 4.1). Similarly, IWs saw possibilities for sharing insights from PSAI with certain coworkers, but only under circumstances that mitigate any misrepresentation of their behaviors (Section 4.2). We delve deeper into the specifics of the norms of appropriateness and norms of distribution to paint a vision where IWs can use PSAI without needing to compromise their dignity.

5.1 Guidelines and Directions for Worker-centric PSAI

Our findings exemplify the paradox between supervision and surveillance that has been discussed in organizational sociology [5, 12, 125]. As critical insiders to the development of these technologies, we treat this paradox as a constraint that propels better solutions for the future of workplace evaluations. Contextual integrity highlights the ways in which PSAI can mitigate, but also aggravate, the power asymmetry at work. A fundamental aspect of power asymmetry is information asymmetry [58], and PSAI is designed to produce new information. In the context of information asymmetry, we reflect that the norms of appropriateness found in our work broadly represent what new information could be generated (Section 4.1). In the same vein, the norms of distribution describe how that information is used within the asymmetry (Section 4.2). Through this section, we aim to inspire socio–technical changes and reflection on how information flows involving PSAI are deployed at work. We envision that these changes need to be centered not only on the development of these technologies but also on structured workplace policies and cultural reforms that encourage a different relationship between workers and their behavioral data.

5.1.1 Align Work–Life Boundary Preferences.

The normalization of remote work has blurred which devices IWs consider work and nonwork, such as the mobile phone [36]. PSAI leverages such devices to model behaviors. In our sample, IWs had distinct preferences for the scope of PSAI vis-a-vis their work–life boundary. This preference was based not only on privacy but also on utility. As we see in Section 4.1.2, one stance was that modeling behaviors outside the work context was intrusive and irrelevant (e.g., sleeping, physical activity, mobile use). The other stance subscribed to the Social–Ecological model, which describes individual actions as the result of a multitude of intersecting factors that lie beyond the individual in their ecological contexts [20]. Yet, this must not be confused with using holistic sensing as an excuse for unchecked sensing. Very much in the spirit of Nissenbaum’s Contextual Integrity framework [102], we need to recognize that IWs have different preferences on how they combine and contrast their private and professional lives. For instance, some workers choose to disclose more of their personal situation to their managers than others who prefer to keep it separate from work (Section 4.1.2). Another way to view this dichotomy is by recognizing different strategies to adjust to the asymmetries at work. More information can lead to more power [124]. IWs who prefer segmentation in PSAI might want to ensure the organization does not get any more power, whereas IWs who want complementary information from PSAI might want to increase the power they have. Arguably, it is challenging to reconcile both perspectives and arrive at a universal PSAI template. Having said that, designers of these systems must be sensitive to these individual preferences when trying to solicit consent and provide notice. Describing the technology along these lines can support more informed decision–making for adoption.
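One way designers might operationalize this sensitivity is to let each IW declare the scope of sensing themselves. The sketch below is a hypothetical configuration of our own devising; the stream names are illustrative and not drawn from any deployed PSAI.

```python
from dataclasses import dataclass

# Hypothetical sketch: an IW-controlled declaration of which behavioral
# streams a PSAI may model. Stream names are illustrative.
@dataclass
class SensingScope:
    work_apps: bool = True         # e.g., calendar, email, document activity
    mobile_use: bool = False       # off by default, respecting segmenters
    sleep: bool = False
    physical_activity: bool = False

    def permitted_streams(self) -> list:
        return [name for name, enabled in vars(self).items() if enabled]

# A "segmenter" keeps PSAI strictly inside the work context...
segmenter = SensingScope()
# ...while an "integrator" opts into holistic, Social-Ecological sensing.
integrator = SensingScope(mobile_use=True, sleep=True, physical_activity=True)
print(segmenter.permitted_streams())    # ['work_apps']
print(integrator.permitted_streams())   # all four streams
```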

5.1.2 Accommodate Pluralistic Models of Effectiveness.

Information work allows workers to approach their work the way they like [120]. Certain IWs anticipated that algorithmic inferences produced by PSAI would take away that discretion. They suspected that such technology would measure all IWs by the same yardstick, which would be unfair to their unique approach to work (Section 4.1.3). This concern could stem from the inability of IWs to determine their own evaluation criteria, which is common in asymmetrical power structures [104]. Hence, IWs resign themselves to accepting that PSAI will also impose, or be used to impose, rigid terms. In fact, contemporary research shows that interventions that give information workers more agency over their time can help them deal with job demands better [30]. The domain of organizational psychology already has some precedent for this pluralistic notion. Research on PSAI grounded in such work has incorporated measures beyond task proficiency, like organizational citizenship, to define performance [34, 94]. However, research on PSAI has also shown that algorithmic inferences can be semantically disconnected from what workers actually perceive when it comes to predicting abstract constructs like wellbeing [29]. The shift to hybrid work has further compelled the need for a more diverse view of effectiveness that might even include domestic activities [24]. Therefore, PSAI systems need to work more intimately with an IW, retraining themselves on the data subject’s unique patterns and inferring insights that speak to that worker’s goals.
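As a hedged illustration of what retraining on the data subject’s uniqueness could mean, a PSAI might score behavior against a worker’s own history rather than a population yardstick. The z-score below is a deliberately simple stand-in for the person-centered modeling in the cited work, not a description of it.

```python
from statistics import mean, stdev

# Illustrative sketch: judge today's behavior against this IW's own baseline
# (a z-score), not against a one-size-fits-all population norm.
def person_centered_score(history: list, today: float) -> float:
    if len(history) < 2 or stdev(history) == 0:
        return 0.0  # not enough personal history to judge deviation
    return (today - mean(history)) / stdev(history)

# The same raw value can be typical for one worker and anomalous for another.
print(person_centered_score([6.0, 6.5, 5.8, 6.2], 6.1))  # ~0: an ordinary day
print(person_centered_score([8.0, 8.2, 7.9, 8.1], 6.1))  # strongly negative
```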

5.1.3 Setup Affordances for Human Reappraisal.

Generally speaking, participants did not want to share the raw streams of data collected by PSAI, but they acknowledged that in certain circumstances, the estimates output by PSAI needed to be distributed. For example, they might want feedback from coworkers or want to compare themselves (Section 4.2). The critical challenge in such flows is to protect the IW from being misrepresented by algorithmic estimates. Arguably, PSAI can produce evaluations automatically and at a higher frequency than traditional organizational methods, but these evaluations need to be complemented with human expertise and perspective. For instance, IWs want to share their PSAI evaluations only after they have had a chance to process them and add context (Section 4.2.2). Recent work shows that IWs might have a very different understanding of their behaviors than what can be measured by PSAI [29, 66]. Conversely, IWs might not be able to interpret personalized insights or conceive actionable changes without the support of their managers or mentors (Section 4.2.1). Both these use cases represent the need for human–in–the–loop information flows that encourage reappraisal by data subjects and experts (Section 4.2.1). Whom a worker considers an expert might vary from person to person. Rawls believed that impartial experts and mutual accountability could form a social contract that legitimizes the kind of social control proposed by PSAI [110]. Since HR typically has a contentious reputation among IWs, organizations might need to appoint specialized officers for this role, such as the up-and-coming wellness officers, although it is yet to be seen how impressions of these officers develop over time. Furthermore, the stakeholders in these flows need to be held accountable and build trust with the data subject [71]. Altering the stakeholders in these flows may not completely remedy the power asymmetry. However, our findings indicate that IWs recognized that changes to the distribution of insights could protect them from further worsening their state. Therefore, these flows need to exist in plain sight for IWs to understand, augment, and redirect.
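A minimal sketch of such a human–in–the–loop flow, under our own assumptions: a PSAI estimate is held back until both the data subject and a trusted expert have appended context, and only then becomes shareable. The class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a reappraisal gate: a PSAI estimate cannot flow to
# other stakeholders until the data subject and an expert add context.
@dataclass
class Inference:
    estimate: str                          # e.g., "elevated stress this week"
    subject_note: Optional[str] = None     # the IW's own interpretation
    expert_note: Optional[str] = None      # a mentor's or coach's reappraisal

    def releasable(self) -> bool:
        return self.subject_note is not None and self.expert_note is not None

inf = Inference("elevated stress this week")
assert not inf.releasable()                # the raw estimate stays private
inf.subject_note = "Two deadlines collided; this is temporary."
inf.expert_note = "Common at this project stage; nothing alarming."
assert inf.releasable()                    # now it may flow onward
```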

5.1.4 Design for Worker Undersight.

Some IWs described that PSAI could be useful to them as complementary information in performance evaluations, to improve awareness of their role, and to justify resource requests (Section 4.1.1). Given the nature of the data it captures, PSAI was viewed as empirical evidence that could drive changes in an IW’s professional state but also in their overall organization. If an IW was overworked, they could convince their manager to rearrange work distribution or give them a day off. Even more so, PSAI insights could be used to bargain for better pay. Typically, in an asymmetrical power structure, workers have inferior bargaining power [104]. Thus, PSAI must be conceived to maximize the bargaining power each IW has. To empower an IW with such technology, we need to design beyond purposes of nudges and reflection [18] and design for collective bargaining [99]. Data–driven bargaining has its basis in traditional methods of Human Resource Management, such as timekeeping [70]. As workers become more conscious of themselves due to increasing perceived or actual technological surveillance, they have the potential to contest claims by their employer. Ideally, such PSAI technologies must be accessible to IWs independent of their employer and independent of the PSAI their organization may have already deployed. If the outputs of PSAI technologies are limited to actions IWs should take (e.g., “you seem very stressed, take a break”), workers might not be able to accumulate enough knowledge of what they have been doing (e.g., “you have been overworked for 60 days, please consult your manager”). Future PSAI technologies need to make inferences that are reproducible and support sensemaking. Note, however, that isolated individual understandings can be limited in an asymmetrical power structure [18, 99]. Instead, pooling information also fits within the norms of distribution, as cumulatively sharing PSAI can help workers gain better perspective on the algorithmic estimates (Section 4.2.1). Research and activism on both crowdwork and gig work have proposed to arm workers with data on their work to combat asymmetry [18, 48]. From our findings, the right iteration of PSAI could serve this purpose for IWs and help them build their own conceptions of these algorithmic estimates. Taking a leaf from studies in crowd work [119], the next step would be to pursue research on collective platforms that leverage behavioral data for workplace bargaining.
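To illustrate the difference between a one-off nudge and undersight, consider an entirely hypothetical sketch in which a worker-held log accumulates daily inferences so that the IW can substantiate a claim like prolonged overwork. The date and threshold are invented for the example.

```python
from datetime import date, timedelta

# Hypothetical sketch: a worker-held log of daily PSAI inferences that can
# substantiate claims ("overworked for 60 days"), not just one-off nudges.
log = {}  # maps a date to whether the IW was overworked that day

def record(day: date, overworked: bool) -> None:
    log[day] = overworked

def consecutive_overwork(as_of: date) -> int:
    """Count consecutive overworked days ending at `as_of`."""
    streak = 0
    while log.get(as_of - timedelta(days=streak), False):
        streak += 1
    return streak

today = date(2023, 3, 1)                      # invented example date
for i in range(60):
    record(today - timedelta(days=i), True)
if consecutive_overwork(today) >= 60:
    print("Overworked for 60 consecutive days; consider raising this.")
```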

5.2 Who Monitors the Monitoring?

We presented our scenarios to participants as PSAI technologies built by third parties. However, our scenarios were adapted from technologies marketed for organizational, not personal, consumption [53, 57, 111]. The power asymmetry at work makes these information flows further opaque. One exception was Viva Insights [56], which at least allows some joint initiative: the organization might need to purchase or subscribe to the service, but each individual IW gets the discretion to use the technology. Yet, such cases do not entirely alleviate IWs’ anxieties of privacy intrusion, as we know from health trackers in wellbeing incentive programs [87]. Our interviews showed that the risk of data transactions between developers and organizations still looms over IWs (Section 4.2.2). Even if developers and organizations follow better practices (Section 5.1), the lack of accountability raises many important questions for the practical deployment of PSAI as empowering technology. Therefore, we bring to attention the role of other stakeholders in providing checks and balances: (i) regulators, who can react to deployments of PSAI, and (ii) researchers, who can preempt future PSAI. Through this section, we add more perspective to ignite further conversation.

5.2.1 Role of Regulators.

The need for improved legislation on worker surveillance is not new [103], but the urgency with which it is revised needs to match the rapid development (and deployment) of AI technologies [21]. Ajunwa et al. have proposed an “Employee Privacy Protection Act” (EPPA) to limit data harnessed by technologies like PSAI to the work context [6]. They have also proposed an “Employee Health Information Privacy Act” (EHIPA) to tackle unscrupulous data transactions by third parties, which could help mitigate some of the challenges of PSAI that senses phenomena outside the workplace [6]. Such propositions are certainly a step in the right direction, but they center on confining the flow of the data, i.e., limiting whom the data go to, not how recipients use it. The algorithmic element of PSAI makes its mechanics elusive, and therefore traditional auditing approaches will fall short.
Assessing the impacts of PSAI despite the black box. Many PSAI technologies are shrouded as “black boxes,” and this opacity supports certain folk theories regarding what these systems are capable of [49]. It is well known that explainability of machine–learning and AI systems is a hard problem, but we believe adding regulation can motivate developers of PSAI to at least account for the information flows and describe them along the dimensions of contextual norms in information work. We can follow the idea of Model Cards proposed by Mitchell et al. [97] to document the intended usage of PSAI. For instance, developers might need to expand on how the algorithmic inferences could be consequential to an IW’s employment, with explicitly defined entry points for human reappraisal and stakeholder involvement. Similarly, they could be required to disclose which aspects of wellbeing and performance are ignored by the system (e.g., “this PSAI cannot be used to infer your team management skills” or “this PSAI is not appropriate for communication–driven roles”).
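Following the spirit of Model Cards [97], such a disclosure could be as lightweight as a structured document. Every field and value in the sketch below is illustrative, a possible shape for a PSAI card rather than a proposed standard.

```python
# Hypothetical sketch of a Model Card-style disclosure for a PSAI system,
# in the spirit of Mitchell et al. [97]. All fields and values are invented.
psai_model_card = {
    "intended_use": "personal reflection on focus and wellbeing patterns",
    "out_of_scope": [
        "inferring team management skills",      # echoing the examples above
        "communication-driven roles",
        "hiring, firing, or compensation decisions",
    ],
    "data_streams": ["calendar metadata", "application switching"],
    "inference_consumers": ["the data subject", "a mentor, with consent"],
    "human_reappraisal": "subject annotates estimates before any distribution",
    "known_limitations": "wellbeing estimates unvalidated for hybrid work",
}

for field, value in psai_model_card.items():
    print(f"{field}: {value}")
```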
Assessing PSAI within socio-economic context. Grill and Andalibi have called to increase the visibility of the social impacts of algorithmic phenotyping [49]. Contemporary research has already raised concerns surrounding the social dynamics of emotion recognition [66], a well–documented manifestation of PSAI. Ideally, regulations must protect against foreseeable but anomalous economic scenarios that compel organizational supervision. For instance, in the future, an economic downturn could be used to justify diverting PSAI inferences for operational decisions such as downsizing. Organizations can argue these situations are analogous to PSAI for public health [135]. These crisis scenarios require regulation the most. Auditors should be able to protect certain jobs that are considered more precarious from PSAI, e.g., contractual positions. At the same time, certain sectors might be deemed too austere for responsible utilization of PSAI: sectors that lack sufficient alternative job openings create an austerity that makes it illusory for workers to improve with PSAI or to meaningfully reject an enforced PSAI. Future research can illuminate other sociological factors that inform protective regulation. In this way, thinking of more worker-centric PSAI could ensure that individual liberties are upheld despite the supervision required for organizational progress.

5.2.2 Role of Researchers.

A harder question to answer is how to define changes in the research of PSAI for workers. The path forward requires insiders like us to embrace reflexivity about our own methods, but also to heed calls for more human–centered approaches from “outsiders” who have critiqued this research. Oftentimes, the scientific advancement of technology–supported HRM hinges on capturing and modeling otherwise unseen or ignored phenomena [91, 94, 136]. Sometimes this research is presented as morally indifferent to misuse. This indifference starts eroding when researchers intersect with more societal disciplines, such as HCI and CSCW. Yet, research projects and papers that do anticipate misuse are often limited to statements that urge consented usage. Unfortunately, data subjects might not be able to make informed decisions without appropriate disclosures of PSAI. Despite our worker–centric approach, in a technology–forward environment an IW’s judgments could be clouded by their personal theories of AI as well as folk theories about the inner functioning of AI-based systems [49]. Only when we appreciate external critique can we understand the risks of perpetuating PSAI, such as the potential for self–harm [64].
Participatory contributions to development of PSAI. The bare minimum would be to include reflective discussions based on the norms of information flow among information workers (or more specific norms suited to their subpopulation). A more worker–centric approach would be to embed qualitative methods, such as the scenario–based interviews we conducted, as a formative evaluation. Ideally, researchers should have IWs participate in the entire research life–cycle, drawing upon principles and ideas from participatory action research [95]. Even before IRB reviews, study protocols could be informed by feedback from IWs to understand if appropriate measures or phenomena are the input for PSAI. Later, models can be validated through a participatory lens where co–researcher IWs can vet the practical value or potential harms. Studies like WeBuildAI [79] already provide a framework for participatory algorithmic decision-making. Future work should expand this to algorithmic phenotyping.
External feedback for research on PSAI. Quantitative researchers also need to understand that participatory methods and qualitative evaluations will not create a universally accepted instance of PSAI. As Calacci notes, participatory algorithm design for workers might not be able to reconcile multiple conflicting stakeholders, but it could at least ensure that normative expectations are not breached [18]. In practice, many researchers innovating new PSAI do not work on recruiting, data acquisition, or participant communication. After all, research on PSAI is often propelled by datasets of behavioral data because these are practical and desirable for supporting scientific replicability and reproducibility. However, these data also distance researchers from the data subjects and, in some cases, may lead to dehumanized conceptions of data subjects and donors as simply “training data” or “numbers” [22]. To mitigate the impersonal relationship between researchers and data subjects, we might consider setting up an independent advisory board formed of subjects and outsiders. Overall, increasing worker-centered research on PSAI can bridge this gap and produce more sensitive and humane systems that improve the prosperity of IWs.

5.3 Limitations and Future Work

This paper focused on PSAI for evaluating the performance and wellbeing of information workers. PSAI can be used in other kinds of work with different sets of demands too. Passive sensing is already used for tracking tasks in gig work and logistics, so we are not far from witnessing algorithmic inferences of abstract constructs such as wellbeing and performance in those settings. PSAI could even be adapted to understand effective work practices in freelancing. Each of these spaces will have its own unique set of paradigms, and therefore, to maintain Contextual Integrity (CI), we need to study those specific contexts of PSAI. The CI framework itself has inherent limitations. Essentially, CI helps test if “presumptions are in favor of the status quo” [102]. By definition, CI does not challenge the status quo. It does not explicitly describe moral or political judgments that might be essential to understand actual use of technologies [102]. Our study defines the bounds that can reflect whether PSAI is empowering or punitive, but future work needs to take more radical approaches to evaluate empowering designs of PSAI. Moreover, these norms are likely to change over time, for example, when the scenarios we presented become reality. This presents opportunities to revisit certain contexts, or compare different contexts to information work, and find more transferable principles to govern PSAI.
Another constraint of our study is our focus on the IW’s perspective because they were the data–subjects of PSAI and the weakest within the power asymmetry of work. However, it is undeniable that several other stakeholder perspectives need to be considered to make PSAI for workers meaningful. While self–reflection and self–management can be powerful, other stakeholders like coworkers, managers, wellness officers, and even family members could have different perspectives on using PSAI.
Scenario–based interviews have been commonly used in human–AI contexts and still find favor in recent research [100, 108]. However, the static nature of scenarios does not express all the practical realities of interacting with technology. Our research assumed the PSAI was capable of producing accurate insights, but other scenario–based studies have found that the accuracy of sensing impacts acceptance [1]. Undoubtedly, a naturalistic investigation with an actual PSAI might reveal a lot more. Yet, such experimental approaches pose many ethical pitfalls. Especially in the work context, it is challenging to practically realize such a study without disrupting actual work. Having said that, we encourage field studies with PSAI that involve some form of contextual inquiry with real technology. For such research investigations, our findings can provide a guideline to anticipate and protect participant interests.
Finally, although we present descriptive guidelines to inform better PSAI, we need quantitative design experiments to find a robust set of heuristics that aid more worker-centric design decisions for PSAI information flows.

6 Conclusion

Passive sensing can be a powerful tool for explaining human behavior and enabling AI inferences of performance and wellbeing. The use of such algorithmic evaluations for HRM in information work may not yet be widespread, but it is on the horizon. By investigating worker perspectives, our research uncovers the norms that Passive Sensing enabled AI needs to adhere to in order to maintain contextual integrity while inferring the effectiveness of Information Workers. We highlight factors specific to information work that can inspire appropriate information flows for evaluating IWs with PSAI and appropriate ways of sharing the resulting insights with others. This study thus helps envision new worker-centric implementations of PSAI that safeguard workers' self-interest and dignity while also promoting their prosperity.

Acknowledgments

This research was supported in part by Cisco. We thank Anind Dey, Shamsi Iqbal, Sauvik Das, and Thomas Plötz for contributing to the conceptualization of a worker–centered formative study. We really appreciate our family and friends for helping publicize the study across their networks. Additionally, we are grateful to the members of the Social Dynamics & Wellbeing lab and the Ubiquitous Computing Group at Georgia Institute of Technology for their assistance, guidance, and feedback.

Footnote

1
Pronounced as “Psy”, as in “Psych”

Supplementary Material

MP4 File (3544548.3581376-talk-video.mp4)
Pre-recorded Video Presentation

References

[1]
Martin Abraham, Cornelia Niessen, Claus Schnabel, Kerstin Lorek, Veronika Grimm, Kathrin Möslein, and Matthias Wrede. 2019. Electronic monitoring at work: The role of attitudes, functions, and perceived control for the acceptance of tracking technologies. Human Resource Management Journal 29, 4 (2019), 657–675.
[2]
ActivTrak. 2022. https://www.activtrak.com/. Accessed: 2022-09-01.
[3]
John G Adair. 1984. The Hawthorne effect: a reconsideration of the methodological artifact. Journal of applied psychology 69, 2 (1984), 334.
[4]
Daniel A Adler, Emily Tseng, Khatiya C Moon, John Q Young, John M Kane, Emanuel Moss, David C Mohr, and Tanzeem Choudhury. 2022. Burnout and the Quantified Workplace: Tensions around Personal Sensing Interventions for Stress in Resident Physicians. (2022).
[5]
Philip E Agre. 1994. Surveillance and capture: Two models of privacy. The information society 10, 2 (1994), 101–127.
[6]
Ifeoma Ajunwa, Kate Crawford, and Jason Schultz. 2017. Limitless worker surveillance. Calif. L. Rev. 105(2017), 735.
[7]
Bobby Allen. 2020. Your Boss Is Watching You: Work-From-Home Boom Leads To More Surveillance. https://www.npr.org/2020/05/13/854014403/your-boss-is-watching-you-work-from-home-boom-leads-to-more-surveillance(2020).
[8]
Antonio Aloisi and Elena Gramano. 2019. Artificial intelligence is watching you at work: Digital surveillance, employee monitoring, and regulatory issues in the EU context. Comp. Lab. L. & Pol’y J. 41 (2019), 95.
[9]
Ron Amadeo. 2022. Google CEO Sundar Pichai says productivity is “not where it needs to be”. https://arstechnica.com/gadgets/2022/08/google-ceo-calls-for-a-more-focused-and-efficient-google/
[10]
My Analytics. 2022. https://docs.microsoft.com/en-us/viva/insights/personal/use/dashboard-2. Accessed: 2022-09-01.
[11]
Anne Archambault and Jonathan Grudin. 2012. A longitudinal study of facebook, linkedin, & twitter use. In Proceedings of the SIGCHI conference on human factors in computing systems. 2741–2750.
[12]
Kirstie Ball. 2010. Workplace surveillance: An overview. Labor History 51, 1 (2010), 87–106.
[13]
Michael Barbaro. 2022. The Rise of the Workplace Surveillance. https://www.nytimes.com/2022/08/24/podcasts/the-daily/workplace-surveillance-productivity-tracking.html?showTranscript=1
[14]
Jeremy Bentham. 1791. Panopticon: or, The inspection-house. Containing the idea of a new principle of construction applicable to any sort of establishment, in which persons of any description are to be kept under inspection, etc. Thomas Byrne.
[15]
Roni Berger. 2015. Now I see it, now I don’t: Researcher’s position and reflexivity in qualitative research. Qualitative research 15, 2 (2015), 219–234.
[16]
PM Blau. 1964. 1964 Exchange and power in social life. New York: Wiley. (1964).
[17]
Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative research in psychology 3, 2 (2006), 77–101.
[18]
Dan Calacci. 2022. Organizing in the End of Employment: Information Sharing, Data Stewardship, and Digital Workerism. In 2022 Symposium on Human-Computer Interaction for Work. 1–9.
[19]
Marta B Calas and Linda Smircich. 1999. Past postmodernism? Reflections and tentative directions. Academy of management review 24, 4 (1999), 649–672.
[20]
Ralph Catalano. 1979. Health, behavior and the community: An ecological perspective. Pergamon Press New York.
[21]
Electronic Privacy Information Center. 2022. Workplace Privacy. https://archive.epic.org/privacy/workplace/(2022).
[22]
Stevie Chancellor, Eric PS Baumer, and Munmun De Choudhury. 2019. Who is the "human" in human-centered machine learning: The case of predicting mental health from social media. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–32.
[23]
Stephen Chen. 2021. Chinese construction firms using AI to monitor workers’ safety … but also to spot ‘loiterers’. https://www.scmp.com/news/china/science/article/3091738/chinese-construction-firms-using-ai-monitor-workers-safety-also
[24]
Janghee Cho and Stephen Voida. 2020. Envisioning new productivity tools for domestic information work environments. (2020).
[25]
Peter Conrad. 1987. Wellness in the work place: Potentials and pitfalls of work-site health promotion. The Milbank Quarterly(1987), 255–275.
[26]
Victor P Cornet and Richard J Holden. 2018. Systematic review of smartphone-based passive sensing for health and wellbeing. Journal of biomedical informatics 77 (2018), 120–132.
[27]
Andy Crabtree and Richard Mortier. 2015. Human data interaction: historical lessons from social studies and CSCW. In ECSCW 2015: Proceedings of the 14th European Conference on Computer Supported Cooperative Work, 19-23 September 2015, Oslo, Norway. Springer, 3–21.
[28]
Corentin Curchod, Gerardo Patriotta, Laurie Cohen, and Nicolas Neysen. 2020. Working for an algorithm: Power asymmetries and agency in online work settings. Administrative Science Quarterly 65, 3 (2020), 644–676.
[29]
Vedant Das Swain, Victor Chen, Shrija Mishra, Stephen M Mattingly, Gregory D Abowd, and Munmun De Choudhury. 2022. Semantic Gap in Predicting Mental Wellbeing through Passive Sensing. In CHI Conference on Human Factors in Computing Systems. 1–16.
[30]
Vedant Das Swain, Javier Hernandez, Brian Houck, Koustuv Saha, Jina Suh, Ahad Chaudhry, Tenny Cho, Wendy Guo, Shamsi T Iqbal, and Mary Czerwinski. 2023. Focused Time Saves Nine: Evaluating Computer–Assisted Protected Time for Hybrid Information Work. In CHI Conference on Human Factors in Computing Systems.
[31]
V Das Swain, H Kwon, S Sargolzaei, B Saket, M Bin Morshed, K Tran, D Patel, Y Tian, J Philipose, Y Cui, 2020. Leveraging WiFi Network Logs to Infer Student Collocation and its Relationship with Academic Performance. arXiv e-prints (2020), arXiv–2005.
[32]
Vedant Das Swain, Manikanta D Reddy, Kari Anne Nies, Louis Tay, Munmun De Choudhury, and Gregory D Abowd. 2019. Birds of a Feather Clock Together: A Study of Person-Organization Fit Through Latent Activity Routines. Proceedings of the ACM on Human-Computer Interaction 3, CSCW(2019), 1–30.
[33]
Vedant Das Swain, Koustuv Saha, Gregory D Abowd, and Munmun De Choudhury. 2020. Social Media and Ubiquitous Technologies for Remote Worker Wellbeing and Productivity in a Post-Pandemic World. In 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI). IEEE, 121–130.
[34]
Vedant Das Swain, Koustuv Saha, Hemang Rajvanshy, Anusha Sirigiri, Julie M. Gregg, Suwen Lin, Gonzalo J. Martinez, Stephen M. Mattingly, Shayan Mirjafari, Raghu Mulukutla, et al. 2019. A Multisensor Person-Centered Approach to Understand the Role of Daily Activities in Job Performance with Organizational Personas. Proc. ACM IMWUT (2019).
[35]
Vedant Das Swain, Koustuv Saha, Manikanta D Reddy, Hemang Rajvanshy, Gregory D Abowd, and Munmun De Choudhury. 2020. Modeling organizational culture with workplace experiences shared on glassdoor. In Proceedings of the 2020 CHI conference on human factors in computing systems. 1–15.
[36]
Vedant Das Swain, Shane Williams, Adam Fourney, and Shamsi T Iqbal. 2022. Two Birds with One Phone: The Role of Mobile Use in the Daily Practices of Remote Information Work. In 2022 Symposium on Human-Computer Interaction for Work. 1–8.
[37]
Vedant Das Swain, Jiajia Xie, Maanit Madan, Sonia Sargolzaei, James Cai, Munmun De Choudhury, Gregory D Abowd, Lauren N Steimle, and B Aditya Prakash. 2021. Empirical networks for localized COVID-19 interventions using WiFi infrastructure at university campuses. medRxiv (2021), 2021–03.
[38]
Scott Davidoff, Min Kyung Lee, Anind K Dey, and John Zimmerman. 2007. Rapidly exploring application design through speed dating. In International conference on ubiquitous computing. Springer, 429–446.
[39]
Jean-François De Moya and Jessie Pallud. 2020. From panopticon to heautopticon: A new form of surveillance introduced by quantified-self practices. Information Systems Journal 30, 6 (2020), 940–976.
[40]
Time Doctor. 2022. https://www.timedoctor.com/. Accessed: 2022-09-01.
[41]
Peter F Drucker. 1999. Knowledge-worker productivity: The biggest challenge. California management review 41, 2 (1999), 79–94.
[42]
Philip Ebert and Wolfgang Freibichler. 2017. Nudge management: applying behavioural science to increase knowledge worker productivity. Journal of organization Design 6, 1 (2017), 1–6.
[43]
Tiantian Feng, Brandon M Booth, Brooke Baldwin-Rodríguez, Felipe Osorno, and Shrikanth Narayanan. 2021. A multimodal analysis of physical activity, sleep, and work shift in nurses with wearable sensor data. Scientific reports 11, 1 (2021), 1–12.
[44]
Michel Foucault. 1975. Discipline and punish. A. Sheridan, Tr., Paris, FR, Gallimard(1975).
[45]
Freespace. 2022. https://www.afreespace.com/solutions/enabling-hybrid-working/#sensors. Accessed: 2022-09-01.
[46]
Kristina Gligorić, Ryen W White, Emre Kiciman, Eric Horvitz, Arnaud Chiolero, and Robert West. 2021. Formation of social ties influences food choice: A campus-wide longitudinal study. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1(2021), 1–25.
[47]
Nicholas Gondor. 2022. BlackRock CEO Larry Fink thinks he has a solution to inflation: Bring people back to the office. https://fortune.com/2022/09/07/blackrock-ceo-larry-fink-remote-work-inflation-labor-productivity/
[48]
Karen Gregory. 2021. ’Worker Data Science’ Can Teach Us How to Fix the Gig Economy. https://www.wired.com/story/labor-organizing-unions-worker-algorithms/(2021).
[49]
Gabriel Grill and Nazanin Andalibi. 2022. Attitudes and Folk Theories of Data Subjects on Transparency and Accuracy in Emotion Recognition. Proceedings of the ACM on Human-Computer Interaction 6, CSCW1(2022), 1–35.
[50]
Kaely Hall, Dong Whi Yoo, Wenrui Zhang, Mehrab Bin Morshed, Vedant Das Swain, Gregory D Abowd, Munmun De Choudhury, Alex Endert, John Stasko, and Jennifer G Kim. 2022. Supporting the Contact Tracing Process with WiFi Location Data: Opportunities and Challenges. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–14.
[51]
Daniel Hanley and Sally Hubbard. 2020. Eyes Everywhere: Amazon’s Surveillance Infrastructure and Revitalizing Worker Power. Open Markets Institute(2020).
[52]
Mikael Holmqvist and Christian Maravelias. 2010. Managing healthy organizations: Worksite health promotion and the new self-management paradigm. Routledge.
[53]
Humanyze. 2022. https://humanyze.com/. Accessed: 2022-09-01.
[54]
Veikko Ikonen and Katja Rentto. 2002. Scenario evaluations for ubiquitous computing-Stories come true?. In Workshop on User-Centered Evaluation of Ubiquitous Computing Application.
[55]
Thomas R Insel. 2017. Digital phenotyping: technology for a new science of behavior. Jama 318, 13 (2017), 1215–1216.
[56]
Viva Insights. 2022. https://techcommunity.microsoft.com/t5/microsoft-viva-blog/daily-briefing-and-myanalytics-branding-updates-to-reflect/ba-p/2681246. Accessed: 2022-09-01.
[57]
Interguard. 2022. https://www.interguardsoftware.com/employee-monitoring-screenshots/. Accessed: 2022-09-01.
[58]
Lucas D Introna. 2000. Workplace surveillance, privacy and distributive justice. Acm Sigcas Computers and Society 30, 4 (2000), 33–39.
[59]
Sachin H Jain, Brian W Powers, Jared B Hawkins, and John S Brownstein. 2015. The digital phenotype. Nature biotechnology 33, 5 (2015), 462–463.
[60]
Arindam Jati, Amrutha Nadarajan, Raghuveer Peri, Karel Mundnich, Tiantian Feng, Benjamin Girault, and Shrikanth Narayanan. 2021. Temporal dynamics of workplace acoustic scenes: Egocentric analysis and prediction. IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021), 756–769.
[61]
Michael C Jensen. 1983. Organization theory and methodology. Accounting review (1983), 319–339.
[62]
Xiaodong Jiang, Jason I Hong, and James A Landay. 2002. Approximate information flows: Socially-based modeling of privacy in ubiquitous computing. In International Conference on Ubiquitous Computing. Springer, 176–193.
[63]
Jodi Kantor and Arya Sundaram. 2022. The Rise of the Worker Productivity Score. https://www.nytimes.com/interactive/2022/08/14/business/worker-productivity-tracking.html
[64]
Shivani Kapania, Oliver Siy, Gabe Clapper, Azhagu Meena SP, and Nithya Sambasivan. 2022. ” Because AI is 100% right and safe”: User Attitudes and Sources of AI Authority in India. In CHI Conference on Human Factors in Computing Systems. 1–18.
[65]
Esther Kaplan. 2015. The spy who fired me: The human costs of workplace monitoring. Harper’s Magazine 1135(2015).
[66]
Harmanpreet Kaur, Daniel McDuff, Alex C Williams, Jaime Teevan, and Shamsi T Iqbal. 2022. “I Didn’t Know I Looked Angry”: Characterizing Observed Emotion and Reported Affect at Work. In CHI Conference on Human Factors in Computing Systems. 1–18.
[67]
Harmanpreet Kaur, Alex C Williams, Daniel McDuff, Mary Czerwinski, Jaime Teevan, and Shamsi T Iqbal. 2020. Optimizing for happiness and productivity: Modeling opportune moments for transitions and breaks at work. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–15.
[68]
Katherine C Kellogg, Melissa A Valentine, and Angele Christin. 2020. Algorithms at work: The new contested terrain of control. Academy of Management Annals 14, 1 (2020), 366–410.
[69]
Kevin Kelly and Gary Wolf. 2007. What is the quantified self. The Quantified Self 5(2007), 2007.
[70]
Vera Khovanskaya, Lynn Dombrowski, Jeffrey Rzeszotarski, and Phoebe Sengers. 2019. The Tools of Management: Adapting Historical Union Tactics to Platform-Mediated Labor. Proceedings of the ACM on Human-Computer Interaction 3, CSCW(2019), 1–22.
[71]
Dongyeon Kim, Kyuhong Park, Yongjin Park, and Jae-Hyeon Ahn. 2019. Willingness to provide personal information: Perspective of privacy calculus in IoT services. Computers in Human Behavior 92 (2019), 273–281.
[72]
Carol Collier Kuhlthau. 1999. The role of experience in the information search process of an early career information worker: Perceptions of uncertainty, complexity, construction, and sources. Journal of the American Society for information Science 50, 5(1999), 399–412.
[73]
Dany Lacombe. 1996. Reforming Foucault: a critique of the social control thesis. British Journal of Sociology(1996), 332–352.
[74]
Benjamin Laker. 2022. The Great Resignation And Great Talent Migration. https://www.forbes.com/sites/benjaminlaker/2022/01/26/the-great-resignation-and-great-talent-migration
[75]
Marc Langheinrich. 2001. Privacy by design—principles of privacy-aware ubiquitous systems. In International conference on ubiquitous computing. Springer, 273–291.
[76]
Thomas B Lawrence, Monika I Winn, and P Devereaux Jennings. 2001. The temporal dynamics of institutionalization. Academy of management review 26, 4 (2001), 624–644.
[77]
Laurie Thomas Lee. 1994. Watch Your E-mail-Employee E-Mail Monitoring and Privacy Law in the Age of the Electronic Sweatshop. J. Marshall L. Rev. 28(1994), 139.
[78]
Min Kyung Lee. 2018. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society 5, 1 (2018), 2053951718756684.
[79]
Min Kyung Lee, Daniel Kusbit, Anson Kahng, Ji Tae Kim, Xinran Yuan, Allissa Chan, Daniel See, Ritesh Noothigattu, Siheon Lee, Alexandros Psomas, 2019. WeBuildAI: Participatory framework for algorithmic governance. Proceedings of the ACM on Human-Computer Interaction 3, CSCW(2019), 1–35.
[80]
Paul Leonardi and Noshir Contractor. 2018. Better people analytics. Harvard Business Review 96, 6 (2018), 70–81.
[81]
Ian Li, Anind Dey, and Jodi Forlizzi. 2010. A stage-based model of personal informatics systems. In Proceedings of the SIGCHI conference on human factors in computing systems. 557–566.
[82]
Linda Little, Stephen Marsh, and Pam Briggs. 2007. Trust and privacy permissions for an ambient world. In Trust in e-services: Technologies, Practices and Challenges. IGI Global, 259–292.
[83]
Linda Little, Elizabeth Sillence, and Pam Briggs. 2009. Ubiquitous systems and the family: thoughts about the networked home. In Proceedings of the 5th Symposium on Usable Privacy and Security. 1–9.
[84]
Steve Lohr. 2013. Big data, trying to build better workers. The New York Times 21(2013).
[85]
Michele Loi. 2019. The digital phenotype: A philosophical and ethical exploration. Philosophy & Technology 32, 1 (2019), 155–171.
[86]
Nianlong Luo, Xunhua Guo, Benjiang Lu, and Guoqing Chen. 2018. Can non-work-related social media use benefit the company? A study on corporate blogging and affective organizational commitment. Computers in Human Behavior 81 (2018), 84–92.
[87]
Ivan Manokha. 2017. Why the rise of wearable tech to monitor employees is worrying. The Conversation 3(2017).
[88]
Ivan Manokha. 2020. The implications of digital employee monitoring and people analytics for power relations in the workplace. Surveillance and Society 18, 4 (2020).
[89]
Gloria Mark, Mary Czerwinski, Shamsi Iqbal, and Paul Johns. 2016. Workplace indicators of mood: Behavioral and cognitive correlates of mood among information workers. In Proceedings of the 6th International Conference on Digital Health Conference. 29–36.
[90]
Gloria Mark, Shamsi Iqbal, Mary Czerwinski, and Paul Johns. 2014. Capturing the mood: facebook and face-to-face encounters in the workplace. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing. ACM, 1082–1094.
[91]
Gloria Mark, Shamsi T Iqbal, Mary Czerwinski, and Paul Johns. 2014. Bored mondays and focused afternoons: the rhythm of attention and online activity in the workplace. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 3025–3034.
[92]
Gloria Mark, Shamsi T Iqbal, Mary Czerwinski, Paul Johns, Akane Sano, and Yuliya Lutchyn. 2016. Email duration, batching and self-interruption: Patterns of email use on productivity and stress. In Proceedings of the 2016 CHI conference on human factors in computing systems. 1717–1728.
[93]
Karl Marx. 1945. Capital: A critique of political economy. Vol. II. (1945).
[94]
Stephen M. Mattingly, Julie M. Gregg, Pino Audia, Ayse Elvan Bayraktaroglu, Andrew T. Campbell, Nitesh V. Chawla, Vedant Das Swain, Munmun De Choudhury, Sidney K. D’Mello, Anind K. Dey, et al. 2019. The Tesserae Project: Large-Scale, Longitudinal, In Situ, Multimodal Sensing of Information Workers. In CHI Ext. Abstracts.
[95]
Alice McIntyre. 2007. Participatory action research. Sage Publications.
[96]
Shayan Mirjafari, Kizito Masaba, Ted Grover, Weichen Wang, Pino Audia, Andrew T Campbell, Nitesh V Chawla, Vedant Das Swain, Munmun De Choudhury, Anind K Dey, 2019. Differentiating Higher and Lower Job Performers in the Workplace Using Mobile Sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 2 (2019), 37.
[97]
Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency. 220–229.
[98]
Subigya Nepal, Gonzalo J Martinez, Shayan Mirjafari, Stephen Mattingly, Vedant Das Swain, Aaron Striegel, Pino G Audia, and Andrew T Campbell. 2021. Assessing the Impact of Commuting on Workplace Performance Using Mobile Sensing. IEEE Pervasive Computing 20, 4 (2021), 52–60.
[99]
Nathan Newman. 2017. Reengineering workplace bargaining: how big data drives lower wages and how reframing labor law can restore information equality in the workplace. U. Cin. L. Rev. 85(2017), 693.
[100]
Joshua Newn, Ryan M Kelly, Simon D’Alfonso, and Reeva Lederman. 2022. Examining and Promoting Explainable Recommendations for Personal Sensing Technology Acceptance. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 3 (2022), 1–27.
[101]
Jennifer Nicholas, Katie Shilton, Stephen M Schueller, Elizabeth L Gray, Mary J Kwasny, David C Mohr, 2019. The role of data type and recipient in individuals’ perspectives on sharing passively collected smartphone data for mental health: Cross-sectional questionnaire study. JMIR mHealth and uHealth 7, 4 (2019), e12578.
[102]
Helen Nissenbaum. 2004. Privacy as contextual integrity. Wash. L. Rev. 79(2004), 119.
[103]
G Daryl Nord, Tipton F McCubbins, and Jeretta Horn Nord. 2006. E-monitoring in the workplace: privacy, legislation, and surveillance software. Commun. ACM 49, 8 (2006), 72–77.
[104]
Ogbole O Ogancha. 2019. Power Asymmetry and the Quest for Inclusiveness in the Workplace. American Economic Review 57 (2019), 62.
[105]
Daniel Olguín Olguín and Alex Sandy Pentland. 2008. Social sensors for automatic data collection. AMCIS 2008 Proceedings(2008), 171.
[106]
Jukka-Pekka Onnela and Scott L Rauch. 2016. Harnessing smartphone-based digital phenotyping to enhance behavioral and mental health. Neuropsychopharmacology 41, 7 (2016), 1691–1696.
[107]
Marco Ortu, Giuseppe Destefanis, Bram Adams, Alessandro Murgia, Michele Marchesi, and Roberto Tonelli. 2015. The jira repository dataset: Understanding social aspects of software development. In Proceedings of the 11th international conference on predictive models and data analytics in software engineering. 1–4.
[108]
Hyanghee Park, Daehwan Ahn, Kartik Hosanagar, and Joonhwan Lee. 2021. Human-AI Interaction in Human Resource Management: Understanding Why Employees Resist Algorithmic Evaluation at Workplaces and How to Mitigate Burdens. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.
[109]
Marshall Scott Poole and Andrew H Van de Ven. 1989. Using paradox to build management and organization theories. Academy of management review 14, 4 (1989), 562–578.
[110]
John Rawls. 2004. A theory of justice. In Ethics. Routledge, 229–234.
[111]
RemoteDesk. 2022. https://www.remotedesk.com/solutions/webcam-monitoring. Accessed: 2022-09-01.
[112]
Xipei Ren, Bin Yu, Yuan Lu, and Aarnout Brombacher. 2018. Exploring cooperative fitness tracking to encourage physical activity among office workers. Proceedings of the ACM on Human-Computer Interaction 2, CSCW(2018), 1–20.
[113]
Sebastien Ricard. 2020. The Year Of The Knowledge Worker, https://www.forbes.com/sites/forbestechcouncil/2020/12/10/the-year-of-the-knowledge-worker/?sh=659e572f7fbb. Accessed: 2021-12-07.
[114]
Carsten Röcker. 2009. Acceptance of future workplace systems: how the social situation influences the usage intention of ambient intelligence technologies in work environments. In Proceedings of the 9th International Conference on Work With Computer Systems. 9–14.
[115]
John Rooksby, Alistair Morrison, and Dave Murray-Rust. 2019. Student perspectives on digital phenotyping: The acceptability of using smartphone data to assess mental health. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–14.
[116]
Maria Rotundo and Paul R Sackett. 2002. The relative importance of task, citizenship, and counterproductive performance to global ratings of job performance: A policy-capturing approach. Journal of applied psychology 87, 1 (2002), 66.
[117]
Koustuv Saha, Ted Grover, Stephen M Mattingly, Vedant Das Swain, Pranshu Gupta, Gonzalo J Martinez, Pablo Robles-Granda, Gloria Mark, Aaron Striegel, and Munmun De Choudhury. 2021. Person-centered predictions of psychological constructs with social media contextualized by multimodal sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, 1 (2021), 1–32.
[118]
Koustuv Saha, Manikanta D Reddy, Stephen M Mattingly, Edward Moskal, Anusha Sirigiri, and Munmun De Choudhury. 2019. LibRA : On LinkedIn based Role Ambiguity and Its Relationship with Wellbeing and Job Performance. Proc. ACM Hum.-Comput. Interact.CSCW (2019).
[119]
Niloufar Salehi, Lilly C Irani, Michael S Bernstein, Ali Alkhatib, Eva Ogbe, and Kristy Milland. 2015. We are dynamo: Overcoming stalling and friction in collective action for crowd workers. In Proceedings of the 33rd annual ACM conference on human factors in computing systems. 1621–1630.
[120]
David EM Sappington. 1991. Incentives in principal-agent relationships. Journal of economic Perspectives 5, 2 (1991), 45–66.
[121]
Nikil Saval. 2014. Cubed: A secret history of the workplace. Anchor Books.
[122]
Florian Schaule, Jan Ole Johanssen, Bernd Bruegge, and Vivian Loftness. 2018. Employing consumer wearables to detect office workers’ cognitive load for interruption management. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 1 (2018), 1–20.
[123]
Andrew Schulman. 2001. The extent of systematic monitoring of employee e-mail and internet use. The Privacy Project, July 9 (2001).
[124]
R Keith Schwer, Michael C Mejza, and Michel Grun-Rehomme. 2010. Workplace violence and stress: The case of taxi drivers. Transportation Journal 49, 2 (2010), 5–23.
[125]
Graham Sewell and James R Barker. 2006. Coercion versus care: Using irony to make sense of organizational surveillance. Academy of Management review 31, 4 (2006), 934–961.
[126]
N Sadat Shami, Jiang Yang, Laura Panc, Casey Dugan, Tristan Ratchford, Jamie C Rasmussen, Yannick M Assogba, Tal Steier, Todd Soule, Stela Lupushor, 2014. Understanding employee social media chatter with enterprise social pulse. In Proc. CSCW.
[127]
Richard Snow. 2013. I invented the modern age: The rise of Henry Ford. Simon and Schuster.
[128]
Susan Leigh Star and James R Griesemer. 1989. Institutional ecology, ‘translations’ and boundary objects: Amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39. Social studies of science 19, 3 (1989), 387–420.
[129]
Luke Stark. 2018. Algorithmic psychometrics and the scalable subject. Social Studies of Science 48, 2 (2018), 204–231.
[130]
FM System. 2022. https://fmsystems.com/products/workplace-analytics/sensor-analytics/. Accessed: 2022-09-01.
[131]
Stefan Tangen. 2005. Demystifying productivity and performance. International Journal of Productivity and performance management (2005).
[132]
Frederick Winslow Taylor. 1919. The principles of scientific management. Harper & brothers.
[133]
Anja Thieme, Danielle Belgrave, and Gavin Doherty. 2020. Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems. ACM Transactions on Computer-Human Interaction (TOCHI) 27, 5(2020), 1–53.
[134]
Amos Tversky. 1977. Features of similarity. Psychological review 84, 4 (1977), 327.
[135]
Christine Utz, Steffen Becker, Theodor Schnitzler, Florian M Farke, Franziska Herbert, Leonie Schaewitz, Martin Degeling, and Markus Dürmuth. 2021. Apps against the spread: Privacy implications and user acceptance of COVID-19-related smartphone apps on three continents. In Proceedings of the 2021 chi conference on human factors in computing systems. 1–22.
[136]
Benjamin N Waber, Daniel Olguin Olguin, Taemie Kim, and Alex Pentland. 2010. Productivity through coffee breaks: Changing social networks by changing break structure. Available at SSRN 1586375(2010).
[137]
Roy Want, Andy Hopper, Veronica Falcao, and Jonathan Gibbons. 1992. The active badge location system. ACM Transactions on Information Systems (TOIS) 10, 1 (1992), 91–102.
[138]
Mark Weiser. 1991. The Computer for the 21 st Century. Scientific american 265, 3 (1991), 94–105.
[139]
Christine T Wolf. 2019. Explainability scenarios: towards scenario-based XAI design. In Proceedings of the 24th International Conference on Intelligent User Interfaces. 252–257.
[140]
Sarah Woods, Michael Walters, Kheng Lee Koay, and Kerstin Dautenhahn. 2006. Comparing human robot interaction scenarios using live and video based methods: towards a novel methodological approach. In 9th IEEE International Workshop on Advanced Motion Control, 2006. IEEE, 750–755.
[141]
Camellia Zakaria, Rajesh Balan, and Youngki Lee. 2019. StressMon: Scalable Detection of Perceived Stress and Depression Using Passive Sensing of Changes in Work Routines and Group Interactions. Proceedings of the ACM on Human-Computer Interaction 3, CSCW(2019), 1–29.
[142]
Ke Zhou, Marios Constantinides, Sagar Joglekar, and Daniele Quercia. 2022. Predicting Meeting Success With Nuanced Emotions. IEEE Pervasive Computing(2022).
