1 Introduction
Industry practitioners, in their pursuit of rapid execution, often overlook the broader context of their product design and its potential human and social implications.
This developer sentiment is perhaps best exemplified by Facebook’s internal motto prior to 2014: “Move fast and break things. The idea is that if you never break anything, you’re probably not moving fast enough” [40]. Frequently, after releasing a new product feature, these companies face significant backlash from users and decide to revoke the feature [9], not only squandering substantial labor but also damaging their reputations.
One recent example is the attendee attention tracking feature in Zoom [51]. Zoom is a video conferencing platform which has seen a huge increase in usage and revenue since the beginning of the COVID-19 pandemic [16] and has rapidly iterated its product to accommodate this growing user base. Zoom developed a feature that allowed the host to monitor the attendees’ attention: if Zoom was not the application in focus on a participant’s computer for over 30 seconds while someone else was sharing their screen, Zoom showed a clock icon next to the participant’s name in the participant panel. At the end of each meeting, Zoom also generated a report for the host listing the percentage of time each participant had the presentation window in focus during the meeting (see Figure 1). This feature received significant backlash after launch [48]. The Zoom team later apologized for falling short of the community’s privacy and security expectations and decided to remove the attention tracker feature permanently [49].
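To make the described behavior concrete, the following is a minimal sketch of the tracking logic as reported above; it is not Zoom’s actual implementation, and all class, function, and variable names (as well as the per-second polling structure) are hypothetical. Only the 30-second focus-loss threshold, the host-visible clock icon, and the end-of-meeting attention percentage come from the feature description.

```python
# Illustrative sketch of the described attention-tracking behavior (hypothetical names,
# not Zoom's code). Only the 30 s threshold, the host-visible clock icon, and the
# end-of-meeting attention percentage are taken from the feature description above.

FOCUS_LOSS_THRESHOLD_S = 30  # out-of-focus time before the clock icon appears


class AttendeeFocus:
    def __init__(self, name: str):
        self.name = name
        self.focused_seconds = 0   # time with the presentation window in focus
        self.unfocused_streak = 0  # current consecutive out-of-focus time
        self.total_seconds = 0     # total time spent in the meeting

    def tick(self, window_in_focus: bool, screen_share_active: bool, dt: int = 1) -> None:
        """Update counters once per dt seconds with the client's reported focus state."""
        self.total_seconds += dt
        if window_in_focus:
            self.focused_seconds += dt
            self.unfocused_streak = 0
        elif screen_share_active:
            self.unfocused_streak += dt

    @property
    def show_clock_icon(self) -> bool:
        # Shown to the host only, once the attendee has been away for over 30 seconds
        return self.unfocused_streak > FOCUS_LOSS_THRESHOLD_S

    def attention_percentage(self) -> float:
        # The figure listed per attendee in the post-meeting host report
        if self.total_seconds == 0:
            return 0.0
        return 100.0 * self.focused_seconds / self.total_seconds
```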
One important cause for product setbacks like this one is failing to integrate user feedback into the early stages of product design [14, 25, 35]. Earlier research has explored various techniques for gathering user input on privacy aspects of different product designs [6, 17, 25]. These methods range from collecting numerical privacy expectation scores from individual users [25], to identifying privacy norms through the Contextual Integrity privacy framework [6], to classifying privacy concerns extracted from unstructured text [17, 20]. However, a relatively under-investigated question is how developers might effectively leverage actual user feedback to enhance the privacy of end user products.
In this paper, we used Zoom attendee attention tracking as a lens to explore the process of integrating user privacy feedback into software development.
While there exist different frameworks and approaches to privacy, we focused in this study on users’ privacy expectations [25], namely user feedback about aspects of a technical feature that contradicted their mental models of how the feature should use their data, as well as how developers utilize those expectations.
We sourced user feedback regarding the Zoom attention tracking feature from public forums, extracting a series of user concerns and organizing them into three categories. We then conducted semi-structured interviews with 18 software engineers of varying seniority at small- to large-scale technology companies. In these interviews, we provided an overview of Zoom’s attention tracking feature and its context before asking participants to suggest system design changes. We then presented user critiques to the participants, one category at a time, and asked if they would modify their proposed system designs. Throughout the process, we observed how participants processed user feedback and integrated it into their design decisions.
Our results suggest benefits of utilizing user feedback as a checklist for edge cases and as evidence for high-level organizational privacy decisions. However, we also observed that redesigning for privacy with user feedback is complicated by several challenges. With this format of presenting user feedback, participants tended either to devise incremental front-end adjustments that grew into more complicated designs and did not scale to further user concerns, or to abandon the feature entirely; participants deferred on designing solutions for feedback they personally did not agree with or understand; and participants viewed some aspects of product design, such as the different ways users might apply the product, as outside their perceived primary role, limiting their engineering solutions to transparently communicating functionality limitations. We discuss future work and recommendations to better understand how software engineers can utilize user feedback to design privacy into software applications, especially in the early stages of design.
2 Related Work
Our main objective in this paper is to explore the process of integrating user feedback into privacy-aware software development, which builds on literature from two main areas: (1) collecting users’ privacy feedback and (2) understanding developers’ needs in privacy by design.
Collecting Users’ Privacy Feedback. Understanding users’ privacy concerns is crucial for businesses, serving not only as a compliance requirement (e.g., GDPR, CCPA) [45] but also as a foundational step for building trust between companies and users [12].
More broadly, previous work stressed the importance of surfacing user feedback for software evolution [18, 47]. Studies have also investigated users’ privacy concerns in different settings, including IoT [6, 22, 50], online advertising [23, 46], and mobile apps [25, 28, 42], aiming to provide practitioners with a foundational understanding for designing their data practices. For example, Lin et al. [25] revealed that users became notably concerned about their privacy when informed that the Dictionary app accessed their location. Yet, their concerns significantly diminished when it was clarified that the location data was solely for identifying trending words in their vicinity. Indeed, feedback on users’ privacy has been an important motivator for companies to introduce “purpose strings” into today’s permission system [5].
More recently, researchers have proposed a few ways to collect users’ privacy feedback systematically, tailored for a specific data practice [6, 17]. For example, Apthorpe et al. [6] examined a variety of settings, devices, and data types in specific contexts through questions such as, “A sleep monitor records audio of its owner. How acceptable is it for the monitor to send this information to [different recipients]?” Lean Privacy Review [17] allowed practitioners to gather direct feedback from users for a data practice, in the form of qualitative free-text descriptions and annotated quantitative categories.
Feedback collection is especially important as previous work has highlighted differences between software developers’ privacy expectations and those of the users they developed for [25, 37, 39], implying developers struggle to anticipate users’ needs and expectations on their own.
However, user privacy feedback is not self-executing. It requires developers to translate this feedback into actual system design decisions. Our study in this paper focuses on a relatively under-investigated question: how do developers use this feedback to enhance the privacy of their products? Specifically, we collected public online critiques about a deprecated feature in popular software: Zoom’s attendee attention tracking. We then presented this feedback to software engineers and observed how they utilized the user feedback to improve the problematic feature’s design.
Understanding Developers’ Needs in Privacy by Design. Regulators have embraced “privacy by design” as a critical element of their ongoing revision of current privacy laws [30]. Previous research has explored challenges for integrating privacy requirements into software design [2, 3, 24, 35, 43]. For example, Tahaei et al. conducted interviews with Privacy Champions in software development teams and discovered barriers such as negative privacy culture, internal prioritization conflicts, limited tool support, unclear evaluation metrics, and technical complexity [43]. Other research has focused on engineer attitudes and practices towards privacy [8, 14, 36], finding engineers face challenges such as lack of perceived responsibility, control, and autonomy; frustrating interactions with legal teams [8]; limiting discourse of privacy to a data security vocabulary; and external organizational climates limiting privacy practice [14].
In contrast, we focus on specific and concrete privacy design tasks, where we instruct developers to integrate privacy into software design and observe their process in order to identify the challenges software engineers face. Our study design was in part inspired by Senarath et al.’s study [35], which asked software engineer participants to design an application for a hypothetical health scenario. During the process, Senarath et al. prompted participants to use multiple privacy frameworks, including Privacy by Design, Data Minimization, Federal Information Processing Standards, and Privacy Impact Assessment, and asked participants to reflect on what they did in the task. Unlike that study, ours is grounded in a real-world data practice and prompts participants with user privacy feedback, exploring privacy from the viewpoint of end users’ expectations rather than through a specific framework.
3 Pilot Study
To inform the design of our main study, we first conducted formative 45-minute semi-structured interviews to understand how software engineers might reflect on a real-world data practice and redesign it to improve its privacy, centered on anticipated user concerns and privacy expectations.
Method:
We recruited five software engineers through personal networks for exploratory semi-structured interviews. Inspired by technology probes as a co-designing method [15], we chose the Zoom attendee attention tracking feature as an example to focus design ideas. During the interview, we presented Zoom’s attendee attention tracking feature and invited participants to consider different user concerns and ideate on different possible designs of the feature to address those concerns through Crazy Eights exercises [41], a design ideation method commonly used in design sprints where participants are encouraged to come up with multiple ideas in a short period of time (in this case, eight ideas in eight minutes), from which one or more ideas can be further developed into a prototype. All participants had mentioned using Zoom as an attendee, host, and presenter in the past, so we also asked participants to informally role-play these different user types to generate more ideas and user concerns from these different perspectives. Participants were compensated $20 USD in the form of shopping gift cards for their participation.
The interview results were analyzed in an inductive open coding approach: the two lead authors manually and independently coded the responses and then discussed them to agree on a selective coding scheme.
Results: This pilot study revealed several common participant behaviors. Participants’ ideas focused mainly on one or two user concerns that they would encounter as attendees in their organizations’ contexts. Four participants focused their ideas around the same user concern of avoiding false negative measurement of attendees’ attention, especially with multi-tasking, highlighting the use of multiple screens and passive listening during meetings as common practice. Even after participants role-played as hosts/managers and presenters as well as attendees, their solutions generally centered around the same attendee concerns. For instance, when considering the manager role, one participant mentioned the same concern they considered as an attendee, saying, “So at least at my workplace, I know that the people who should be listening would be listening, so I wouldn’t care if they [the attendees] are switching windows or they’re not paying attention”.
Participants’ ideas were also high-level and lacked engineering actionability. Their suggested changes involved either not using the Zoom attention feature at all or relying on other signals, such as “using ‘optional’ in calendar to not include people in passive meetings” or “using other interactions such as Q&A during presentations to gauge attentiveness”. Even when they were asked to role-play as other user types, they suggested managers and hosts use broader, more generally “accurate” metrics for tracking participant attentiveness, but were unable to outline clear metric definitions when prompted. Other changes participants suggested added vague incremental front-end polishing to the original user interface; one participant summarized their suggestion as “designing the away timer [icon] in a much more sophisticated manner.” Overall, participants generally focused on one or two limited initial concerns they anticipated based on their own experiences as users and devised solutions that lacked specificity and scope for multiple user types, at least in this rapid ideation framework.
Based on these preliminary findings, we observed that engineers could benefit from more guidance, potentially in the form of more diverse user perspectives, to devise more effective and coherent privacy-aware solutions. These results align with previous work suggesting that developers struggle to anticipate user needs and expectations outside of their own experiences [25, 37, 39]. However, we found that participants were able to ideate on a feature such as Zoom attendee attention tracking both as an end user and as a software developer for the product, even if the engineered solutions were somewhat limited by their personal user perspective. We therefore used this pilot study to formulate the methods for our main study, deciding to present real user feedback as a way to ground participants’ ideas in more actionable solutions.
5 Results
Our results suggest that integrating user feedback into software development for privacy is beneficial but also fraught with challenges. This section discusses the observed benefits and three challenges in detail.
5.1 Benefits of User Privacy Feedback
User feedback as a checklist for edge cases. User privacy feedback allowed participants to cross-check their assumptions and anticipated concerns. This empowered participants to more confidently and efficiently validate their solutions and make clarifying adjustments to address the presented feedback, much like working through a checklist. Participants’ reported confidence in their designs also generally increased after seeing user feedback and redesigning further (Figure 5). For example, P5 mentioned that “being able to compare [my design] against real user feedback made me realize it had accounted for a lot of the original feedback, so then it raised my confidence.” These responses were to the question, “How confident do you feel this product will satisfy user needs and concerns (on a scale of 1 being not at all confident to 5 being extremely confident)?” The difference between the distribution of confidence ratings for initial designs and the distribution after the third round of feedback redesigns is statistically significant at the α = 0.05 level based on a Wilcoxon signed-rank test (p = 0.030).
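As a side note on the analysis, a paired comparison of this kind can be computed with a standard statistics library. The sketch below uses hypothetical, made-up confidence ratings purely for illustration; only the test itself (SciPy’s Wilcoxon signed-rank implementation) corresponds to the analysis described above, and the rating values are not our study data.

```python
# Minimal sketch of the paired comparison described above, using SciPy's
# Wilcoxon signed-rank test. The two rating lists are hypothetical placeholders
# on the 1-5 confidence scale, NOT the actual study responses.
from scipy.stats import wilcoxon

initial_confidence = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3, 4, 3, 2, 3, 3, 4, 2, 3]
final_confidence   = [4, 3, 4, 5, 3, 3, 5, 4, 3, 2, 4, 4, 3, 4, 3, 5, 3, 4]

# Two-sided test of whether the paired differences are centered at zero
stat, p_value = wilcoxon(initial_confidence, final_confidence)
print(f"W = {stat}, p = {p_value:.3f}")  # significant at alpha = 0.05 if p < 0.05
```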
User feedback as evidence for high-level organizational privacy decisions. Participants recognized user feedback as a catalyst for product design changes at a higher organizational level than they might normally be able to achieve on their own. As one participant mentioned, “It does give some more weight to have direct user feedback that goes along with [design decisions]. At least I feel that’s how it operates in [my company]” (P11). The same participant also mentioned the value of “some user studies with numbers, because people like numbers. [At my company] it would go a long way to have a more concrete percentage, more than anecdotes.” Other participants echoed similar sentiments, saying, “I would say those are pretty valid concerns. But do they really matter? I don’t know. So that’s the reason we want some metrics. We invent some quantitative metrics [...] We should listen to everyone but do the right things. And we need more data to do the right things” (P12) and, “In feature development, it’s worth just breaking out these slices of data and just seeing whether there’s a meaningful behavior difference before making decisions” (P3).
5.2 Challenge 1: Polarized Design Suggestions when Redesigning Privacy with User Feedback
At the outset of the study, we anticipated a primary advantage of user privacy feedback to be the ability to enlighten engineers about user concerns and requirements they might not have initially foreseen and enable them to design scalable privacy solutions. After all, previous work identified that acknowledging the differences between developers’ perceived user privacy expectations and those actually expected by users was necessary for developers to successfully integrate privacy into software development [37]. However, we found that user privacy feedback in the form of raw text did not empower participants to devise feature updates that could scale across multiple user concerns. We observed a range of different design solutions (Figures 3, 4), but they mostly centered on either minor, incremental interface changes (e.g., adding notice/consent) or complete deprecation of the feature.
Incremental interface changes. When participants saw user feedback they had not originally anticipated, they were able to describe incremental technical solutions, usually focused on the front-end user interface. For example, many of these participants mentioned the best solution was to ensure that attendees understood the data being collected about them and that hosts understood the limitations of the data collected and reported (in a similar vein to purpose strings [12]). These were often in the form of notification interfaces for attendees describing the attention tracking process and “disclaimers” for hosts clarifying suggested usage and best practices for interpreting the reported data (see Figure 3a, 4a). As an example of messaging for hosts, P6 mentioned, “Instead of saying [attendees are] not paying attention, say they’re not watching their screen, they could still be listening to you.” In fact, transparency was the most-mentioned category of design suggestions (Table 3), with 16 participants (89%) bringing it up at least once.
However, these band-aid approaches often required many more patches to accommodate other feedback. This buildup of unanticipated changes inadvertently deprioritized usability in some cases. For instance, four participants proposed designs for meeting hosts to customize meeting expectations for different meeting types and different types of attendees (e.g., Figure 3b), but when asked to clarify specifics for the context of attention, a couple mentioned manually highlighting active times during the meeting and tracking attention only for certain attendee roles determined before the meeting. This type of customizability, while attempting to address a variety of concerns about accuracy in attention tracking, would impose considerable configuration overhead on meeting hosts before every meeting they wanted to track.
Participants who devised these patches for specific concerns also expressed lower confidence and some doubts that their solutions could handle all users’ edge cases; as P14 put it, “I still can’t completely foresee how it will be used [...] because of that, I’m not gonna give it a 5 [extremely confident].” P2 even commented, “I think the more and more I get educated on these corner cases, the more thornier it seems.” For these participants, it seemed that exposure to novel concerns through the feedback humbled their original ideas and forced them to reconsider their privacy design strategies, yet their solutions remained patchwork fixes that snowballed into further front-end interface updates. Addressing concerns at the level of individual tasks introduced issues of scale and usability.
Feature deprecation. The challenges of redesigning privacy with user feedback frequently drove participants to consider the opposite extreme solution, feature deprecation, echoing Zoom’s actual solution. Four participants consistently asked if they were allowed to deprecate the feature, or if the company was requiring them to maintain the feature. For instance, these participants asked, “Am I being pressured to keep this feature?” (P11) or mentioned “I don’t think I would implement that feature [...] unless I was forced to” (P7).
Another significant reason participants struggled to design a solution and considered abandoning the feature was a seemingly unsolvable conflict between their moral stance and technical feasibility. Several participants expressed hesitancy to add more surveillance methods that “cross the line” for user privacy and, as a result, struggled to design technical solutions to track attention. For instance, participants mentioned, “The idea of surveillance in general tends not to be a great idea. And most people kinda just dislike that idea off the bat” (P1), “Despite all the attempts to frame it in a positive way, I don’t think people really like being tracked, just as a human nature kind of thing” (P5), and “I think that the act of tracking people inherently makes people suspicious, and I think people are naturally suspicious but have a right to be suspicious, and there will always be some amount of concern about how that data is being used, even if you demonstrate that it’s public and anonymized and aggregated” (P3). This concern led some participants to believe they could not engineer a technical solution, with some concluding, “I don’t know how you would be able to resolve this without even more invasive techniques” (P5), “I don’t think you’ll be able to capture [attention] digitally in any sort of digital platform or environment” (P1), and “I think anything that I would propose would be too invasive of privacy, and I would not be comfortable implementing that” (P7). It is worth noting that several participants seemed to assume an inherent organizational or business pressure to keep the product feature. As P6 mentioned, “The fact that I built this feature in the first place meant that some other customer of mine asked for it, and if I don’t give them this feature, then they might just go somewhere else.”
Upon closer inspection of participants’ reasons to deprecate the product, they also seemed to draw on personal opinions and experiences beyond the displayed user feedback. Before seeing any feedback, P11 even noted, “I don’t think using it is good [...] it should just not exist. Because anything I can think of to, like, improve the quality of data is just more recording and analyzing. And they’re all very subjective to whoever creates the logic [...] so I think there’s a lot of human factor and decision, and I don’t think any of it is reliable and should not be presented to the end user.” As another example, P7 justified deprecation after seeing all three rounds of user feedback but for reasons beyond any specific feedback, mentioning, “I don’t think it effectively tracks performance, and I think it erodes trust and culture within the company. And it would cause micromanaging within the teams. And I think people would try to find ways to gamify the system [...] and I feel like there’d be more politicking involved. I think this would not be good for company culture. Yeah, I foresee it as like a detriment [...] I think it shuts down voices potentially.”
5.3 Challenge 2: Confirmation Bias in Integrating User Privacy Feedback
We had also anticipated user privacy feedback to foster empathy between developers and end users, but we found that this format of raw user privacy feedback was limited by apparent confirmation bias. Section 5.1 describes how feedback that aligned with participants’ expectations was effective for boosting participants’ confidence in their designs, but when feedback ran counter to participants’ expectations, participants were much more hesitant to suggest product fixes to address those concerns. Some of these participants deferred evaluation of these concerns to others, such as product managers, to decide on task prioritization for engineers, while a couple of participants plainly dismissed the value of addressing those concerns. For instance, for one user comment they disagreed with, P2 decided, “I think I would essentially pass the buck to like PM’s or UX researchers to essentially determine, is that a worthwhile business problem to solve? Like how much value does that generate for our users, whether it’s like they feel safer.” P16 dismissed one feedback comment, mentioning, “That just makes this feature redundant. I don’t think this should be addressed [...] No one cares about that. So if I’m a developer at Zoom, I wouldn’t want to implement this feature, because there’s no point.” In general, participants who did not connect with user feedback seemed less motivated to take the initiative to address the concerns or looked to others to verify whether they were worth addressing.
5.4 Challenge 3: Perceived Responsibility in Designing for Privacy
Another common challenge we observed was participants viewing product decision-making as outside of their primary role or acknowledging that outside factors could thwart their efforts to make a difference in their organization, even when presented with user feedback.
Engineering solutions accounting for different user applications of the product were limited to clear communications of feature functionality. There was a common pattern of participants considering some extent of product usage as outside the scope of their software design work. For instance, when considering how this tool would be used by employers for tracking attention of their employees in work meetings, many participants acknowledged how punitive action based on this tool could be seen as unfair and could even be abused to “make arguments in bad faith” (P3), but most of those participants mentioned this consideration as out of scope of the product design or struggled to think of ways to address it, other than to add clarity, such as with the previously-mentioned “disclaimers” about the data for users. For instance, participants mentioned, “You can’t guarantee that companies aren’t gonna [...] use this as justification for axing hybrid or fully remote work, right? Because that’s beyond a Zoom product designer’s control” (P3), “We provide the information, but [...] it’s really up to the meeting user, whatever the context of the meeting is [...] but as for what it means to be accountable, I don’t think the tool needs to answer that” (P1), and “Some things you just can’t control, like which environment [...] and what the employer is using this data for [...] that is the responsibility of the employer and the employee” (P4). Others echoed similar sentiments but mentioned clear communication, both from the product and from intermediate users such as organization heads and meeting hosts, as the best solution for this case, saying, “I don’t think we can address sort of, like, how it should be used. But we can provide enough information and enough clarity on how much information we do provide for users of it to make sort of those decisions kinda outside of the feature” (P1), or even, “Do companies have a right to use this information against you when they’re evaluating a performance? So I think the company should be very clear about this, only then the feature should be enabled” (P16). Other participants stated potential misuse as a factor to simply deprecate the feature, also struggling to think of other design solutions. For instance, P8 mentioned, “We built this tool that the employer may be trying to force on people, and it’s more on the employer for using it a certain way [...] But that raises the question of if this feature should even exist, if it can cause these sorts of controversies.”
In general, these participants did not consider downstream implications for end users as central for their engineering role in product design, though some of them advocated for transparently communicating how the data can be used so that users could address those implications, while others considered removing the feature entirely.
Participants deferred to others to determine how the broader organization balanced feature prioritization with user privacy comfort. A number of participants mentioned a consideration of realistic business factors, namely that the product would likely cater to the highest-paying customers. P3 recognized that for many product decisions, “It depends on who your highest paying customer is and what they want,” and P13 similarly noted, “I think it really ultimately depends on whether or not that’s a feature that the paying customers want to put in, ’cause if you end up putting that in as a feature change, and the people that actually pay for the licenses don’t like it, then there’s no point actually putting it in.” These participants assumed that they as engineers would concede to what the company decided would be best for their paying customers, with a couple of participants explicitly mentioning they would prioritize surveying preferences from premium users (or potential paying users) and defer to them to decide on product direction. In this sense, engineers recognized not only their employer and the end user as stakeholders in the design process but also the perceived relative importance (in this case, financial weight) of certain users as a factor shaping product design decisions.
Several other participants also cited external regulations as taking precedence when guiding product decisions. For example, P8 noted, “I would assume that Zoom has some sort of security council and legal counsel [...] to discuss this type of point, because there’s also different countries – I know the European Union has different security laws regarding user data,” and P12 mentioned, “First, we have to make sure we follow the laws, regulations – GDPR, some common legal codes, something like that.” In general, these broader balance considerations of employer and valued customer priorities as well as external regulations were seen as out of scope for these participants in the role of an engineer.
7 Discussion
This study aims to inform the design of a system that can help developers integrate user feedback to improve product design to meet users’ privacy expectations. Our findings indicate that presenting user privacy concerns as raw text feedback to software engineers has limited effectiveness for several reasons. Here we discuss potential solutions and research directions based on our observed challenges.
Addressing polarized designs. In our study, we provided developers with raw feedback text categorized according to ethical principles [29]. However, addressing this varied feedback might necessitate different degrees of modification to the system. We observed this in participants’ polarized design decisions, which did not always account for inter-dependencies in design and were sometimes even contradictory. Presenting only raw feedback text also makes it hard for developers to prioritize their fixes. Future systems should consider developing a more formalized and standardized protocol for presenting feedback to software engineers, potentially by offering the feedback in a more organized and consolidated format.
Addressing confirmation bias. Raw feedback batched in categories may have exposed participants to unexpected or new user concerns, but it could not foster sufficient empathy between developers and end users to overcome developer confirmation bias. This study is not the first to observe personal opinions affecting engineers’ privacy work [7, 35], so further research should explore better ways to enable this empathy. While user feedback in this study seemed effective as a confirmatory medium through which to connect developers with users, more methods could be explored beyond simply surfacing raw feedback, such as incorporating role-play as suggested in prior work [37], or using feedback to expose a product’s usefulness and practical results, which have been shown to affect engineers’ intentions to follow privacy engineering methodologies [36].
Addressing perceived responsibility. In our study, we observed that user feedback alone was insufficient for engineers to autonomously make high-level privacy decisions, but participants mentioned that feedback and quantitative data can inform decisions made by others in their organizations. Data as evidence lends more weight to decisions and could garner more trust within organizations.
For the participants, privacy was a collaborative effort that required organizational buy-in, as there seemed to be an inherent and sometimes explicit tension between end user concerns and organizational pressures to maintain an existing product that served business customers. Participants implied that broader product decisions were usually organization- or management-driven and that engineers did not have much influence to reconcile their personal viewpoints with broader organizational goals, but that user feedback and, perhaps more strongly, quantitative usage data would empower them to raise user concerns with management. Future work to prepare engineers for that conversation could help them promote human-centered privacy design.
8 Future Work
Future work is needed to investigate how to overcome the challenges we observed and potentially take advantage of and expand the benefits we identified. Here we specify relevant research communities and outline other potential future directions based on this study.
Research communities. This area would benefit from further mutual understanding and collaboration between the human-computer interaction (HCI) and software engineering communities. For instance, HCI researchers can develop ways of gathering and analyzing user feedback to improve software design, but we found that those efforts will be limited if they do not account for software process realities, such as perceived responsibility. Similarly, this study can inform software engineering researchers on how to utilize user feedback in software engineering processes, such as a metrics-based evidence pipeline that lets engineers inform higher-level product decisions rather than second-guess their individual capacity to effect product changes. This community collaboration is crucial for future work to better integrate user feedback into practical software processes, especially for iterative privacy design.
User feedback beyond critiques. Our study focused on constructive critiques and formulated them as feature requests. However, other types of user feedback, such as positive feedback, could also be explored. For example, instead of completely focusing on designs addressing user complaints, noting aspects praised by users in other successful products and transferring them to new product designs is a reasonable approach in product design; indeed, this is a component of competitive analysis in fields such as business [1] and user experience design [33]. Exploring how to surface those positive comments in the context of software development could prove fruitful. Furthermore, future work could explore different relationships between feature requests. For instance, our study revealed polarized designs which inadvertently deprioritized unstated design considerations such as usability; controlling for mutually-exclusive feature requests was out of scope for our study but could inform more effective developer design decisions and explore how engineers might balance potentially conflicting user considerations (such as usability and functionality). In addition, this study surfaced feedback through social media. Other sources and more direct feedback loops, such as targeted in-app surveys, could add insight into the effect of different types of feedback on software development. With recent advancements in generative artificial intelligence, it could also be worthwhile to explore ways of generating simulated user feedback in formats beneficial for product designers.
Scaling real-world user feedback. Building on previous work [21, 35], our study could potentially be extended to investigate how to form privacy guidelines, especially after identifying challenges engineers faced with designing for privacy from unstructured user feedback.
Historical user feedback data, and ways to surface it, could also inform how privacy guidelines should shift over time. Research to understand how user feedback can inform privacy guidelines, such as by identifying loopholes in existing regulations, can bring privacy practice closer to user privacy expectations. This could be achieved systematically by crawling online critiques at scale and analyzing them qualitatively with a bottom-up approach.
B Interview Guide
The following is the guide agreed on by the two interviewers for the semi-structured interviews for this paper’s main study. In the spirit of the semi-structured format, individual interviews varied in interviewer questions and follow-up probes, but the general interview structure followed this guide.
Intro [1 min]
Hello! My name is ____ and I’m doing an interview about workplace surveillance as part of my studies at my institution. Thank you for agreeing to take the time to talk today. This interview will take 45-50 minutes of your time. It is also voluntary, so you may choose to withdraw from the interview at any point for any reason. And please don’t hesitate to ask questions at any point.
Consent [1 min]
For the purposes of my study, I will be recording our conversation as well as any sketches I may ask you to do during this interview. I will anonymize your identity and will report the results only in the context of academic publications. Do I have your consent to use your data, and would you still like to participate in this study?
Warmup [3 min]
Thank you! To get things rolling, let’s start by going through a few questions:
• What is your current occupation?
• [If not currently SWE or in tech] Have you worked as a software engineer at a company within the last 10 years? Think back to when you did work as one.
• If you’re comfortable sharing, may I ask what company you work at?
• [If not] Would you be okay with sharing the approximate size of your company?
• How long have you worked at your company?
• Approximately how large is your direct team?
• Have you ever worked with any privacy frameworks (such as Fair Info Practices, Privacy by Design, or Data Minimization) in your software designs?
• [If not] How familiar are you with privacy frameworks?
Initial Brainstorm [10 min]
Have you heard of Zoom’s attendee attention tracking feature? It’s a feature on the Zoom video calling platform that isn’t active anymore, but basically if someone was sharing their screen in a group call, the hosts could see which participants weren’t actively on the presentation window for more than 30 seconds. Inactivity was marked by a gray timer icon next to their name on the participants list (visible only to the hosts). After the meeting, Zoom would generate a report listing the percentage of time each participant had Zoom in focus during the meeting, as well as how long they were in the meeting. [Show screenshots.]
I’d like you to first imagine you were a user in a meeting utilizing Zoom Attention. Can you anticipate any needs and concerns as a user?
How might these issues be addressed in a feature update?
How confident do you feel this product will satisfy user needs and concerns (on a scale of 1 being not at all confident to 5 being extremely confident)?
• [Based on the ideas, ask for follow-up explanation, such as with an info flow, user journey storyboard, privacy storyboard, system architecture diagram, etc.]
• [Also consider probing about what data is collected, data retention, how data is shared, how data is secured, user rights to data, user control of data, etc]
• [Be prepared to show an example]
First Iterative Design [8 min]
Now I’d like you to act as a software developer for this feature on your product, Zoom. Taking the original Zoom Attention feature, let’s say you were provided the following request to update and change the feature:
[Choose one of the following 3 based on the random order chosen for the protocol for this participant - can paste into a separate doc and share so the participant can refer to these]
(1) You hear this feedback sample from users:
- “We wish Zoom would display a notification or let people on the call see whether it’s enabled.”
- “People could be just doing other things and not have the meeting window up. It doesn’t mean they’re not listening.”
- “Attendees should know how accountability and performance are potentially being ’graded’ along with the limitations of them.”
- “So, my ’I’m actually watching this on a different screen’ won’t actually fly for much longer?”
(2) You hear this feedback sample from users:
- “That’s kind of intrusive in a way that often doesn’t happen in the physical space or in the physical world. Employers have always had incredible control over how employees spend their time, but the technology makes it faster, more invisible and more sophisticated.”
- “If you have to track people to make sure they pay attention during the meeting, the meeting is pointless and too long. Meetings that are short and packed with useful info nobody wants to miss, are well-attended.”
- “The ’attention-tracking’ feature […] was appropriate in some business contexts, but for many new consumers, it presented a privacy conflict.”
(3) You hear this feedback sample from users:
- “Everyone in the meeting room, the host, the speaker, and the attendees should be able to see who’s paying attention.”
- “I think the issue is not that Zoom knows if its application window has focus, but that it *reports* focus state to anyone other than the user.”
- “Android users could look at their notifications but not iOS users […] if you were on Android, you could bring down the Notifications tray for an unlimited amount of time since it didn’t fully cover the Zoom app like the iPhone’s one did.”
- “The feature creates a scenario whereby you could be penalized by an employer for doing job-related things, such as checking your notes or updating a memo during an important meeting.”
Now I would like you to draw up a change to this feature that would address this request (and any other needs that you might think of). If you feel your previous idea is sufficient, please explain how it addresses this request.
• [Based on the ideas, ask for follow-up explanation, such as with an info flow, user storyboard, privacy storyboard, system architecture diagram, etc.]
• [Also consider probing about what data is collected, data retention, how data is shared, how data is secured, user rights to data, user control of data, etc]
How confident do you feel this product will satisfy user needs and concerns (on a scale of 1 being not at all confident to 5 being extremely confident)?
Follow-up Iterative Designs [8x2 min] (Repeat x2)
Now let’s say you as the developer also receive this feature request:
[Choose one of the 3 listed above based on the random order from the protocol for this participant - can paste into a separate doc and share so the participant can refer to these]
As before, please draw up any fixes or feature improvements you would add to address this request (and any other needs you might think of), on top of your previous ideas. If you feel your previous design is sufficient, please explain how it addresses this request.
How confident do you feel this product will satisfy user needs and concerns (on a scale of 1 being not at all confident to 5 being extremely confident)?
Closing [1 min]
That’s all the questions I had for you today! Is there anything else about your ideas or thoughts about anything we’ve talked about that you didn’t get to mention yet?
Thank you very much for your time! I’ll collect your information for paying out your incentive. If you would also like to stay in touch and see the potential results of this study, let me know and I’ll reach out once I have updates to share!