
Designing a Data-Driven Survey System: Leveraging Participants' Online Data to Personalize Surveys

Published: 11 May 2024

Abstract

User surveys are essential to user-centered research in many fields, including human-computer interaction (HCI). Survey personalization—specifically, adapting questionnaires to the respondents’ profiles and experiences—can improve reliability and quality of responses. However, popular survey platforms lack usable mechanisms for seamlessly importing participants’ data from other systems. This paper explores the design of a data-driven survey system to fill this gap. First, we conducted formative research, including a literature review and a survey of researchers (N = 52), to understand researchers’ practices, experiences, needs, and interests in a data-driven survey system. Then, we designed and implemented a minimum viable product called Data-Driven Surveys (DDS), which enables including respondents’ data from online service accounts (Fitbit, Instagram, and GitHub) in survey questions, answers, and flow/logic on existing survey platforms (Qualtrics and SurveyMonkey). Our system is open source and can be extended to work with more online service accounts and survey platforms. It can enhance the survey research experience for both researchers and respondents. A demonstration video is available here: https://doi.org/10.17605/osf.io/vedbj

1 Introduction

User surveys are a fundamental tool in empirical research studies. They support user-centered research in many fields related to information technologies [85]. In human-computer interaction (HCI), surveys support understanding how and why users perceive and interact with (online) technologies.
Collecting rich and reliable survey data is essential to provide meaningful insights. Survey research methodology has evolved dramatically over the years. Online technologies superseded traditional methods of administering surveys (e.g., paper- and phone-based data collection) [23]. Online platforms such as Qualtrics, SurveyMonkey, LimeSurvey, and Google Forms enable researchers to design complex questionnaires, deploy them to respondents, and collect and analyze responses. One benefit of these platforms is easier personalization, specifically by adapting questionnaires to respondents’ profiles and experiences. For instance, addressing respondents by name or customizing the survey layout [22, 23, 39, 60] tends to increase response rates. Personalization has mixed effects on data quality [39, 49]; however, several studies suggest that it can improve research quality, build trust, and influence the way respondents report sensitive information [36].
Survey personalization can be made more powerful by using richer information about participants’ lives, extending beyond simple customization to personalized questions and response options for each respondent. Online platforms, social media, Internet of Things (IoT) devices, and wearable technologies collect data that reflects users’ behavior. This data presents an opportunity for data-driven survey personalization. By accessing (with the respondent’s permission) select items from the respondent’s digital history, researchers can dynamically modify the survey, including screening, survey flow, display logic, skip logic, templated questions and answers, and even asking respondents questions about specific bits of their digital lives. This approach could increase respondents’ interest in participating, increase their engagement with the survey, mitigate response biases [21] (e.g., social desirability biases [4]), and ultimately improve data quality.
Data-driven personalization can enable transitioning from abstract to concrete inquiries. For example, general questions such as “What is your main driver when you have very active weeks?” can be replaced with specific questions like, “On the week of [April 12th], you engaged in [four] different activities and spent a total of [two] hours exercising. What was your main motivator during this exceptionally active period?” By adding granularity to survey questions, data-driven personalization can improve data quality.
Data-driven personalization also has benefits for recall accuracy. For example, surveys using methodologies inspired by the critical incident technique (CIT) [15, 31] rely on participants recalling past experiences accurately (e.g., [17, 76]). However, memory is imperfect, and cognitive biases can affect recall accuracy [56]. Data-driven surveys can mitigate this challenge by using participants’ digital history to present them with concrete descriptions of past events (e.g., showing respondents their Facebook/Instagram posts and the most aggressive comment each post received), thus improving their recall accuracy. For example, Huguenin et al. [41] used an ad-hoc system based on LimeSurvey to show respondents their recent Foursquare check-ins and then asked questions about them.
Further, the benefits of data-driven personalization can be applied to research across many fields. For instance, social scientists could incorporate users’ Facebook/Instagram posts in their surveys to gain insights about their social media behavior; sports scientists could use Fitbit activities to ask athletes about their exercise habits, and HCI researchers could use GitHub in their surveys when studying software development practices.1
Despite the benefits of data-driven survey personalization,2 it comes with certain downsides. One downside is that it may bias samples toward participants who are willing to share their data, which may make it harder to recruit large samples. Another downside is that implementing it with existing survey tools can be difficult. Existing tools lack usable mechanisms for seamlessly importing participants’ data from other systems in order to enable data-driven personalization. Instead, when researchers used respondents’ data to personalize survey questions (e.g., [27, 41, 82]), they had to resort to complex and resource-intensive methods such as customizing existing survey platforms and accessing respondent data manually (e.g., through data subject access requests made by respondents)3 or through ad-hoc API calls. Creating data-driven surveys remains cumbersome, which prevents researchers from unleashing their full potential.
In this paper, we explore the design of a data-driven survey system, starting with its desirability and usefulness for researchers (i.e., our potential users). Our paper consists of three parts: (i) a systematic survey of papers that include user surveys (N = 74) (see Section 3.1), (ii) a survey of researchers who conduct user surveys (N = 52) (see Section 3.2), and (iii) the design of our minimum viable product (MVP) platform: Data-Driven Surveys or DDS (see Section 4). These three pillars allow us to: first, establish the need for DDS; second, identify key features and requirements for DDS; and finally, present the resultant design. The literature review surfaced several design implications. For example, we identified a potential for automating participant screening by using data from their online accounts instead of relying solely on self-reported data. The researcher survey underscored our belief that protecting respondents’ privacy should be a priority. Considering these findings, we designed and implemented DDS: an open-source MVP version of our system. DDS enables the inclusion of respondents’ data from online service platforms, such as Fitbit, Instagram, and GitHub, in the questions and answers of survey questionnaires and in the survey flow and logic. We provide a 6-minute demonstration video of DDS on OSF.4 With DDS, researchers can enhance a Qualtrics or SurveyMonkey survey, for instance, by automatically screening respondents whose Instagram account was created less than a year ago, by asking certain questions only to respondents who record yoga activities on Fitbit, by including the respondents’ total step counts in a question text, by asking respondents questions about a specific GitHub repository (e.g., a repository with at least 20 open issues), and by showing them the map and date of their running activity for better recollection. The statistics collected from the respondents’ Fitbit data can also simply be used to characterize the sample of respondents (slightly active, very active, etc.) and to identify correlations (e.g., with factor analysis) between their physical activity and their survey responses. DDS takes multiple steps to protect respondents’ privacy, such as using OAuth to access respondents’ accounts, fetching only the data needed for a given survey and storing it only on the survey platform, and informing respondents about the data that is collected and the way it is used. For a more detailed overview of the privacy protection mechanisms, see Section 4.
DDS currently works with Qualtrics and SurveyMonkey, and can import data from Fitbit, Instagram, and GitHub. It is extensible to other survey platforms and online services (e.g., Spotify), thus ensuring data collection efficacy across various contexts.
Our research contributions are two-fold:
First, we present empirical insights gathered from extensive formative research, thus offering a comprehensive understanding of the challenges and opportunities associated with data-driven survey personalization. These insights inform DDS’ design and functionality, ensuring that it aligns with researchers’ diverse practices and needs across domains.
Second, we describe the design and implementation of our minimum viable product platform, DDS, and we showcase its capabilities through practical use cases.
This work introduces an innovative solution that has the potential to enhance survey research, particularly in the context of HCI. By simplifying the development of data-driven surveys, we provide both technical and non-technical researchers with a powerful tool—free and open-source—to enhance their survey designs. This innovation offers researchers new opportunities for in-depth exploration and provides respondents with more engaging and personalized interactions.

2 Related Work

In this section, we primarily review the existing literature and tools related to online toolkits and surveys. We also highlight the challenges that some researchers encounter when creating personalized surveys.

2.1 Existing Survey Tools

Several researchers have studied the user interface of online surveys. Genter et al. [34] compared drag-and-drop and numeric-entry options for survey ranking questions by using Qualtrics and analyzed responses regarding distribution patterns, response times, and challenges associated with each design. Ebert et al. [26] developed QButterfly to overcome integration challenges between survey tools and stimuli. QButterfly is an HCI toolkit that enables non-technical researchers to conduct online user interaction studies by using the Qualtrics and LimeSurvey platforms. QButterfly enables displaying stimulus web pages and recording clickstreams.
Several studies focused on improving researcher interaction with survey tools. Molnar [59] presented SMARTRIQS5, which enables real-time interaction within Qualtrics surveys, such as grouping respondents, assigning them conditions, and enabling them to chat with other respondents in the same group. Ajilore et al. [2] studied whether using the local language, Pidgin English, and animated GIFs could increase online survey interactivity and engagement. They found that GIFs and Pidgin English were perceived as highly interactive and had specific benefits such as fun and persuasiveness. Finally, Rodrigues et al. [69] focused on analyzing survey data. They developed Lyzeli,6 which enables analyzing and correlating survey responses to address the error-proneness of survey-data analysis. Lyzeli provides features such as automatic question type identification, sentiment analysis, data filtering, word cloud visualization, and graphing.

2.2 The Use of Chatbots in Survey Research

In a different approach, several papers [68, 83, 86, 88] explored using chatbots as an alternative to web-based surveys. Wen and Colley [83] introduced a real-time moderator chat that prompts respondents to address unanswered questions or to provide clarifications. Xiao et al. [86] compared the results of an AI chatbot-driven survey with a traditional online survey in a field study with 600 participants. They found that the chatbot-driven approach elicited more relevant, specific, and clear responses compared to the Qualtrics survey. Participants engaged more with the chatbot, tended to provide more detailed and disclosing responses, and were willing to participate in future surveys. Conversely, Zarouali et al. [88] compared web-based and chatbot surveys for data collection. They found that respondents of web surveys often have more favorable response characteristics and data quality, as compared to chatbot surveys. Finally, Rhim et al. [68] explored humanization techniques in survey chatbots. They compared a humanized chatbot with features such as self-introduction and adaptive responses to a baseline chatbot. They found the humanized survey chatbot had increased positive perceptions, increased interaction time, and improved data quality.

2.3 Existing Instrumental Toolkits

Online behavioral research in cognitive psychology is growing, with a particular focus on reaction-time experiments that often demand advanced skills. Several studies proposed solutions for improving transparency, replicability, usability, and accessibility in experimental social sciences. Several studies [2, 5, 7, 61] focused on the accuracy of measuring reaction time in web experiments. Nikulchev et al. [61] presented a solution that minimizes the bias introduced by different devices and improves the precision of reaction time measurement. Balietti [7] developed nodeGame,7 a framework for conducting real-time synchronous experiments online or in a lab environment by using web browsers on a wide range of devices. Anwyl-Irvine et al. [5] also introduced the Gorilla Experiment Builder,8 which manages time-sensitive experiments across different participant groups, settings, and devices. Henninger et al. [40] introduced lab.js,9 an experiment builder for web-based data collection in both online and lab-based research. The platform provides a visual interface that does not require coding. Chen et al. [16] developed oTree10 for implementing interactive laboratory, online, and field experiments. Gureckis et al. [37] developed psiTurk11 to reduce technical barriers for conducting controlled online behavioral experiments on Amazon’s Mechanical Turk.
Finally, Ferreira et al. [29] developed a mobile instrumentation toolkit called AWARE. It utilizes smartphones’ built-in sensors to facilitate the acquisition, inference, and generation of context for data analysis. Such applications are usually limited to a specific device or hardware type and can drain device batteries.

2.4 Researchers’ Challenges in Data-Driven Survey Creation

Several studies used data-driven surveys to varying extents. Huguenin et al. [41] studied how the precision of a location check-in impacts its utility as perceived by the user who made it. The researchers modified the LimeSurvey platform to create personalized questions. First, respondents granted the survey system access to their Foursquare accounts. Then, they were screened based on their Foursquare data, and a number of check-ins were extracted. Finally, for each extracted check-in, a summary of the check-in was shown in question text along with four alternate versions that had different levels of information. The respondent then rated the utility of the four versions.
Epstein et al. [27] studied how to support fitness tracker users when they stop using their wearables. Using the Fitbit API and an ad-hoc solution, they asked respondents to grant access to their Fitbit accounts. In the survey, respondents were shown and asked questions about seven visual representations of their fitness-tracking data.
Bauer et al. [9] used a Facebook app to study changes in Facebook users’ privacy preferences over time. For each respondent, they randomly selected posts, collected their metadata, and included them in the survey. Ayalon and Toch [6] asked participants to comment on their privacy preferences for old Facebook posts. However, respondents were asked to select and report individual posts themselves. Anaraky et al. [3] developed a Facebook app to study how framing and default techniques influence users’ privacy decisions for automatic public tagging in their friends’ pictures. For each respondent they retrieved photos and tagging information, and used both in their survey questions. Johnson et al. [43] developed a Facebook app to study respondents’ relationships with their Facebook friends. Two other studies analyzed privacy and data-retention needs by using custom software to show respondents files and images from their e-mail and cloud-storage accounts [18, 45].
Wei et al. [82] used advertisement information from participants’ Twitter data in a survey about advertisement targeting mechanisms preferences. Respondents had to request all their Twitter data and send it to the researchers. This study demonstrates the value of including real, personal data when asking participants about their experiences and preferences.
These examples show that personalized surveys enable advanced research, but creating them is complex and resource-intensive. Researchers had to customize existing survey platforms, develop custom applications, and access respondents’ data through APIs with ad-hoc software tools. Although fruitful, these approaches can be technically challenging and costly. Some researchers relied on self-reports, which can also be time-consuming for participants to look up, be unreliable, and be error-prone. A data-driven survey tool could significantly mitigate these challenges.

3 Formative Research

To understand researchers’ practices when conducting survey research, we conducted two formative research [32] studies: (i) a literature review of papers with survey methodology and (ii) an online survey of researchers who conduct survey research.

3.1 Literature Review

One approach to understand researchers’ practices when they use a particular research methodology is to review their published papers. Therefore, we conducted a systematic literature review [47] of papers that used survey methodologies. Many HCI studies employ surveys. To narrow the scope and make this review more tractable, we focused specifically on fitness-tracking studies. We chose fitness tracking because it is an active research area with many studies, a large proportion of which employ surveys. Also, fitness-tracking research presents a promising initial use case for our work because the data fitness trackers collect is inherently relevant to data-driven surveys.

3.1.1 Method.

For data collection, we first defined keywords12 and searched them in ACM DL, IEEE Xplore, AIS library, USENIX, Science Direct, and Springer Link. We also used Google Scholar to include papers from other databases and publishers (e.g., Taylor & Francis). After removing duplicates, we identified 689 papers. We excluded papers that (i) were not written in English, (ii) were published in 2012 or earlier, or (iii) were not peer-reviewed. We included only papers that (i) were about fitness trackers and/or how fitness trackers are used and (ii) conducted a survey with participants. The second author applied the inclusion and exclusion criteria and later confirmed them with the first author. Using the exclusion criteria and the first inclusion criterion, we selected 234 out of 689 papers.13 After applying the second inclusion criterion, we identified N = 74 papers that conducted user surveys. For data analysis, we employed reflexive thematic analysis (TA) [10, 11]—a method that encourages researchers to consider their own perspectives during data interpretation. Reflexive TA has recently been used in HCI, along with systematic reviews for data analysis [19]. This approach enabled us to code the papers from the perspective of data-driven surveys: identifying key themes in the textual data and examining the way these themes related to or reflected the use of or need for data-driven surveys. The first and second authors collaboratively conducted the analysis. Initially, the first author reviewed the abstract and methods sections of the papers (and other sections if necessary) and coded the parts related to survey methodology and data collection. Subsequently, the first and second authors discussed the findings to identify codes and themes and to refine the coding structure. To quantify these findings, we counted the number of papers that included relevant items.

3.1.2 Results.

Qualtrics was the most commonly used platform, reported in n = 13 papers. Unfortunately, n = 54 papers did not report which platform they used. Other reported platforms included Google Forms, Unipark, QuestionPro, REDCap, SoJump, and LimeSurvey, all mentioned rarely. In contrast, n = 32 papers reported the type of fitness tracker(s) studied in their research, with Fitbit being the most commonly reported (n = 32), followed by Garmin (n = 8) and Apple (n = 7). Other brands were reported fewer than 7 times.
Table 1:
Paper | Data Collection Method | Purpose of Data Collection
Epstein et al. [27] | Fitbit API | to implement an ad-hoc data-driven survey
Zufferey et al. [90] | Fitbit API | to infer participants’ personality traits
Orlosky et al. [62] | Fitbit API | did not specify
Dreher et al. [24] | used a proprietary service (Fitabase*) | to validate Fitbit usage
Stück et al. [75] | asked members of a health campaign (AchieveMint) | to analyze physical activity behavior
Shin [74] | used a mobile app called “HealthExported for Fitbit to CSV” | to analyze physical activity behavior
Dai et al. [20] | did not specify | to validate Fitbit usage
Preusse et al. [66] | downloaded manually | did not specify
* See https://www.fitabase.com/, last accessed Feb. 2024.
Table 1: The relevance of the papers to data-driven surveys: eight papers used fitness data. Only three used the Fitbit API for data collection, and only one of them implemented an ad-hoc data-driven survey system.
Table 2:
Freq. | Survey Goals | Examples
Survey used as a primary research instrument:
n = 9 | Assessed device ownership, brand, and usage patterns | [25, 67]
n = 2 | Assessed respondent’s motivation or behavior for physical activity | [63, 84]
n = 3 | Assessed perceived usefulness or value of data | [44, 80]
n = 4 | Assessed willingness to share data and data-sharing behavior | [33, 73]
n = 6 | Assessed perceived data sensitivity and privacy concerns | [53, 54]
n = 4 | Presented threat scenarios to study privacy-coping strategies | [8, 33]
n = 2 | Tested data-sharing ideas using hypothetical scenarios (e.g., mock-up interfaces) | [46, 81]
Survey used as a screener tool:
n = 36 | Assessed device ownership, brand, usage patterns, and using 3rd-party apps | [58, 79]
n = 1 | Verified device ownership by sharing a photo of the device | [89]
n = 2 | Asked participants to bring sample data to (follow-up) interviews | [38, 74]
Note: A data-driven approach could benefit surveys with these goals in several ways, including: (i) saving respondents’ time by omitting easily collectible questions (e.g., device brand), (ii) enhancing response reliability by preventing dishonest answers (e.g., false device ownership claims on Prolific), (iii) improving data quality by reducing reliance on self-reported information, and (iv) personalizing questions (e.g., customizing mock-up interfaces).
Table 2: Survey types that could have benefited from using data-driven surveys.
Table 1 summarizes the papers where the researchers requested participants’ data for further analysis. Out of n = 8 identified papers, three papers used the Fitbit API for data collection [27, 62, 90]. One of them used the data to implement an ad-hoc data-driven survey system [27] (as discussed in Section 2.4).
We identified n = 24 papers with survey questions that could have benefited from using data-driven surveys. Table 2 (top) summarizes the goals of the surveys in these papers and how data-driven surveys could have helped. We also identified n = 48 papers that used surveys as a screener tool for recruiting respondents. As highlighted in Table 2 (bottom), most of these screener questions could be measured using a fitness-tracking platform and thus could benefit from a data-driven survey platform.
Our literature survey unveiled the following insights into the potential and significance of data-driven surveys in the field of fitness tracker research and beyond:
Promising Potential for Fitness-Tracker Research. Whereas most existing studies have not directly incorporated data-driven surveys, there is a clear opportunity for data-driven approaches to improve the reliability and depth of insights derived from fitness-tracking studies. Note that our review of related works (see Section 2) also identified this potential in other research areas (e.g., privacy research [6, 9, 41]).
Advances in Survey Quality. Data-driven surveys could improve survey quality. They can transform abstract questions into concrete inquiries, directly collect data to save time and avoid errors (intentional or not) in respondent recall, and facilitate faster and more precise participant screening. Thus, data-driven surveys could (partially) bridge the gap between measurement studies and traditional survey methods.
Platform and Data Trends. Qualtrics was the most prevalent survey platform. Similarly, Fitbit stood out as the dominant source of fitness tracker data in this research domain. Therefore, it could be worthwhile to include these two platforms in a data-driven survey tool.

3.2 Online Survey

We conducted a survey, targeted at researchers who conduct surveys in their research, to explore whether a data-driven survey tool would be valuable for researchers in HCI and related disciplines. Throughout this section, we use the term “researcher” instead of “respondent” to avoid confusion between our respondents and generic survey respondents.
Table 3:
Gender | n | %
Woman | 26 | 50.0%
Man | 23 | 44.2%
Non-binary | 1 | 1.9%
Prefer not to disclose | 2 | 3.8%
Age | n | %
25-34 years | 15 | 28.8%
35-44 years | 24 | 46.2%
45-54 years | 7 | 13.5%
55-64 years | 3 | 5.8%
65+ years | 2 | 3.8%
Prefer not to disclose | 1 | 3.8%
Main Research Field | n | %
Security and privacy (S&P)* | 26 | 50.0%
HCI | 19 | 36.5%
Information systems (IS) | 3 | 5.8%
Social/professional topics | 2 | 3.8%
Surveys Conducted (Last 5 Yrs) | n | %
4+ | 43 | 82.7%
3 | 5 | 9.6%
2 | 3 | 5.8%
1 | 1 | 1.9%
Proportion of Research Using Surveys | n | %
None | 0 | 0.0%
Very little | 1 | 1.9%
A little | 5 | 9.6%
About half | 16 | 30.8%
A lot | 11 | 21.1%
A great deal | 16 | 30.8%
All | 3 | 5.8%
Proportion of the Used Survey Platforms | n | %
Qualtrics | 43 | 82.7%
Google Forms | 36 | 69.2%
SurveyMonkey | 27 | 51.9%
LimeSurvey | 12 | 23.1%
Microsoft Forms | 2 | 3.8%
Typeform | 2 | 3.8%
Unipark | 2 | 3.8%
Alchemer | 1 | 1.9%
AWS | 1 | 1.9%
Checkbox Survey Solutions | 1 | 1.9%
EU Survey | 1 | 1.9%
QuestionPro | 1 | 1.9%
SharePoint | 1 | 1.9%
Slido | 1 | 1.9%
UserZoom | 1 | 1.9%
WJX | 1 | 1.9%
Made their own system | 2 | 3.8%
* Note that 96.2% of these researchers reported working in the sub-field of usable security and privacy.
Table 3: Researcher demographics, primary research fields, and survey experience.

3.2.1 Method.

The full text of the survey is available in Appendix A. The survey began with a consent form, followed by some background and screening questions about the researcher’s experience with online survey tools. Next, there were three sections describing possible features of a data-driven survey tool:
(1) Conditioning survey flow or skip logic on extracted personal data, e.g., skip to Question 5 if the participant has made fewer than 200 posts on Facebook (a.k.a. “Survey Flow and Display/Skip Logic”);
(2) Using extracted personal data to fill in variables for templated questions, e.g., “Your most active month (in terms of step count) this year was [month]. Please explain why.” (a.k.a. “Templated Questions and Answers”); and
(3) Using extracted personal data to select example activities to ask questions about, e.g., select the most recent Facebook post with more than 200 likes, display it, and ask questions about it (a.k.a. “Custom Variables”).
For each of these three features, we asked researchers whether they had previously implemented similar functionality, whether they would find it useful, and how likely they would be to use it, followed by an open-ended question about scenarios where this feature might be useful. We then asked researchers to describe (in an open-ended way) any pain points that would be addressed or benefits they would derive from using the three features. Finally, in order to characterize our sample, we asked about age, gender, and specific research fields.14 We implemented the survey in Qualtrics. Before deployment, we did cognitive pretests with two colleagues. We used their feedback to adjust the phrasing of questions to improve clarity. The survey was designed to take less than 10 minutes, and, in practice, took 7 minutes and 25 seconds (median).
We recruited researchers by advertising within researcher networks, including Slack channels (e.g., the SOUPS Slack channel), mailing lists (e.g., the UMD HCIL list and the SOUPS announcement list), and social media groups associated with conferences and communities (e.g., the CHI Meta group), to invite them to participate. Researchers were not compensated. The survey was approved by our university’s institutional review board (IRB).
For this formative research, we report on multiple-choice questions with descriptive statistics. We analyzed open-ended responses using an open-coding approach [71], where the second author performed the initial coding and then refined it in consultation with the entire team.

3.2.2 Results.

A total of 76 researchers started the survey, of which 55 completed it. Among them, N = 52 researchers had prior experience with survey tools; we report their answers here. A summary of their demographics, research areas, and experience with surveys is given in Table 3. Quantitative results regarding the perceived usefulness of the three considered features of data-driven surveys, and the self-reported likelihood to use them, are summarized in Figure 1.
Regarding the proposed Survey Flow and Display/Skip Logic feature, 57.7% of the researchers (n = 30) reported having experience with a similar mechanism. These were evenly divided between those who considered it easy to implement (n = 13, 43.3%) and those who reported it as difficult (n = 13, 43.3%).15 Among those who had not used this feature before, 11.5% (n = 6) reported they had not thought about such functionality, and 5.8% (n = 3) mentioned it was technically difficult to implement. A researcher mentioned: “I did use survey flows/skip logic and display logic, just not based on participant data from social networks, but rather data from inside the survey. That was complicated enough ;)” [4 or more, a lot, S&P].16 Most researchers found this feature useful and said they were likely to use it if it was easy to implement (for details, see Figure 1). Researchers reported several potentially useful scenarios, e.g., configuring a survey based on the frequency of interaction with a particular technology. They mentioned this could greatly reduce the exhaustive list of questions they must ask beforehand: “In my recent research, I could configure survey logic based on people’s frequency of interaction with VR devices based on their daily use activity of VR devices.” [4 or more, about half, HCI]. Several researchers mentioned scenarios related to user behavior and preferences research, such as analyzing developers’ practices, users’ unconscious habits, and the correlation between social media behavior and privacy preferences. “I would find this very useful when surveying software developers about secure development practices. I could imagine skipping a question or changing the flow based on their commit history on GitHub.” [4 or more, a great deal, S&P]. A few researchers mentioned use cases in health research and for conducting experience-sampling and diary methods.
Figure 1:
Figure 1: Usefulness of a data-driven survey system (top) and likelihood to use one, if it existed (bottom), as reported by the researchers (N = 52). Breakdown per feature: Survey Flow/Logic (left), Templated Questions and Answers (middle), and Custom Variables (right).
Regarding the proposed Templated Questions and Answers feature, 26.9% of the researchers (n = 14) had experience using a similar approach. Half of those who had used this feature before found the implementation difficult (n = 7, 50.0%). Among those who had not used it, 21.2% (n = 11) had not thought of it, and 11.5% (n = 6) mentioned technical difficulties. A researcher mentioned a challenge related to data accessibility and scalability: “I was interested in doing this with Spotify listening data; however, the API provides limited access. We instead had to rely on user-requested data dumps, which takes about a week per user and doesn’t scale well.” [4 or more, about half, HCI]. A strong majority found the proposed feature useful and many reported they would be likely to use it. Scenarios suggested for using this feature included customizing questions and answers based on technologies that users interacted with, such as using logged hours on Steam for gaming research or showing vignettes based on the social media platform users were using. One researcher mentioned the usefulness of customization in long-term studies: “On longitudinal surveys reminding a person how many times they had used an app or feature in the prior month when asking them how useful they feel the app/features is.” [3, a little, HCI]. Another researcher mentioned combining social media data with users’ prosocial behavior (e.g., about vaccination).
Regarding the proposed Custom Variables feature, only 9.6% of the researchers (n = 5) reported using a similar approach. Among those, two researchers found it difficult to implement. Of the remaining researchers who had never used it, many said they had never thought about it (n = 19, 36.5%), and some pointed to technical barriers (n = 7, 13.5%). More than half of researchers expected the feature to be useful, and around half considered it likely they would use it. Multiple usage scenarios were suggested, including evaluating the experiences and behaviors of software developers, studying user behavior in social media, and applying stratified sampling methods. “I could imagine using this to discuss specific pushes to GitHub or posts to StackOverflow to ask them about why they did that with this post or push, how they came up with that post or push, etc.” [4 or more, a great deal, S&P]. “Showing respondents a random selection of posts and asking them to answer a series of questions as it relates to that post.” [4 or more, about half, S&P].
Researchers identified three major advantages the proposed data-driven features could bring to their research. First was the potential to improve data quality. Researchers pointed out that data-driven surveys will bring precise data into the survey, avoid reliance on only self-reported data, and gain more in-depth insights. “From a research perspective, this would be a treasure trove of information!” [4 or more, a lot, S&P]. Two mentioned that this could significantly improve participants’ recall. “It would be nice to be able to ask about specific experiences the participant has had, without risking them misremembering the event.” [2, about half, S&P]. Second, they identified numerous research opportunities that could be realized with data-driven surveys, including asking novel research questions, conducting more complex studies with context-based questions in diverse formats, and collecting more specific and objective answers rather than generic and subjective ones. For example, data-driven surveys “would make complex ’if’-Statements a lot shorter and easier.” [4 or more, a great deal, S&P]. Third, researchers highlighted the potential benefits for respondents’ engagement and experience. They mentioned reducing respondents’ fatigue, keeping respondents engaged, and allowing them to think about specific events when answering questions. “[...] minimize the number of questions that participants need to complete, present questions only to suitable participants, ease participants burden of going through a lengthy questionnaire.” [4 or more, about half, HCI].
Researchers also identified some key drawbacks.17 Several expressed concerns about privacy and anonymity in using respondents’ data. Some were concerned such a system would lead to identification of the participants. “This data typically allows for unique identification of the individual, which is what we would like to avoid in surveys.” [4 or more, a lot, S&P]. This valid feedback is not surprising, given that a large proportion of our sample works in (usable) security and privacy. Concern for participant privacy informed our design (see Section 4.2). In particular, we considered transparency paramount and decided that our solution should be open-source and should communicate to respondents what data is collected about them. Many researchers (appropriately) take great care when asking participants for detailed or sensitive information. Previously, researchers have asked respondents to download a complete copy of their data (sometimes through a third-party application) and then to send it to them. These researchers had to develop extensive infrastructure to limit and/or clean the data they collected. With an automated approach, access to respondents’ data would be more fine-grained and controlled through using APIs’ permissions infrastructure, making it easier to take appropriate care. Nonetheless, with or without automated tools, it is incumbent upon researchers to think carefully about data collection and how to protect their participants; our design is intended to support researchers in doing so.
Relatedly, another researcher mentioned that proper consent collection is necessary before deploying such systems. Thus, a data-driven survey should include meaningful informed consent, where respondents can read and agree before granting access to their data. Some researchers felt such privacy issues could create a deployment challenge, as the study might be blocked by IRBs and ethics committees. “[...] it might pose a problem for IRB management if PII like social media accounts are linked on the same platform that holds the study data [...]” [3, about half, HCI]. We will further discuss privacy-enhancing strategies to ensure ethical data collection.
Finally, several researchers noted challenges with usability and learning curve. Some thought integrating data into existing survey platforms would be too complicated. “I need to figure out how to integrate different data sources by myself.” [4 or more, a great deal, HCI]. Our envisioned design would facilitate integrating data, simplifying the process, and ensuring that using external data from online services for personalization would be exactly the same as using internal data (i.e., responses to previous questions in the survey). The use of internal data is a very common practice among researchers, suggesting there would be limited learning required.
To conclude, our online user survey highlights the need for and interest in a data-driven survey tool and sheds light on the following points:
Feature Preference. While all three of the proposed features received positive feedback, the Survey Flow and Display/Skip Logic feature was favored as most useful and likely to be used.
Potential Use Cases and Benefits. Researchers envisioned various potential usage scenarios for data-driven survey tools, from customized data-driven questions in gaming research to studying developers’ commit behavior. Researchers considered data-driven survey tools beneficial for improving data quality, expanding research opportunities, and enhancing participant engagement.
Concerns. Privacy and anonymity emerged as a critical concern, highlighting the need for including privacy-enhancing mechanisms when designing data-driven survey tools (i.e., privacy-by-design).

4 Design and Implementation of DDS

In this section, we describe the design and implementation of our data-driven survey platform (named DDS) and the methodology we followed. An overview of DDS is given in Figure 2, and a demo video is available on OSF. The source code for DDS is hosted on GitHub.18
Figure 2:
Figure 2: Overview of the architecture and functioning of DDS. Researchers create a project on DDS and provide it with credentials (e.g., API key, OAuth token, Client ID, Client Secret) for managing surveys on the survey platform (SP) and apps on the selected data providers (DPs) (a.k.a. online services). DDS declares the data that can potentially be extracted from the DPs on the SP, where the researchers can use them while designing their survey. These actions are denoted as Steps 0 and are interleaved in time. To begin a data-driven survey, respondents follow a link to DDS (Step 1), where they are redirected to the required DPs (Step 2) in order to grant DDS access to their data. DDS downloads and processes their data from the DPs (Step 3), uploads the processed data to the SP (Step 4), then deletes the data from its memory. Finally, the respondents are redirected to the SP to take the data-driven survey (Step 5). Researchers and respondents use web browsers to interact with the different systems (i.e., SP, DDS, and DPs). The six screenshots at the bottom of the figure illustrate the different websites and web pages used by these users. These screenshots are provided as visual cues; readable versions of these web pages are available in other figures and in the demo video. DDS currently supports Qualtrics and SurveyMonkey as SPs, as well as Fitbit, Instagram, and GitHub as DPs. Grayed-out icons represent other popular platforms and services that could be integrated in the future (Google Forms; Spotify).

4.1 Design Introduction

4.1.1 Design Goals.

Our goal is to design and implement an extensible and easy-to-use system that enables integrating survey participants’ data from online services into surveys. We do not aim to build a survey platform (SP). This system should enable researchers to integrate the three main techniques of Survey Flow and Display/Skip Logic, Templated Questions and Answers, and Custom Variables into their surveys. Given the positive feedback we collected during our formative research (see Section 3.2.2), we decided to consider these techniques the core functionalities of DDS.

4.1.2 Design Methodology.

We followed an iterative design process [65]. One author proposed an initial system design, which was then iterated on together by four authors. Two authors (henceforth, designers) started the UI design process by brainstorming functional requirements and interface sequences (i.e., user flows). They started by sketching the interface with pen and paper and gathered feedback from colleagues in our institution with extensive UX/UI design experience. After converging on an initial design, the designers switched to using Figma [30], which makes it easy to iterate on and build UX/UI designs. After converging on these initial UI and system designs, we began implementation.
To develop DDS we used an adapted Scrum approach, which is an agile project management system [48]. Two authors, henceforth the Development Team (DT), implemented DDS. Another author took on both roles of Product Owner (PO) (to ensure the prioritization of features) and Scrum Master (to ensure that the Scrum process is being followed). The DT would plan ‘sprints’ to implement several features based on the PO’s requests. Then, they implemented the features, while having daily meetings to discuss the progress and address any issues. New feature requests from the DT and PO were recorded through GitHub issues and project management. At the end of each week, the DT selected the next set of features based on priority. The DT would also actively add suggestions for new features.
During the design and implementation processes, all authors periodically brainstormed potential shortcomings or limitations of the existing design, allowing us to identify and address issues quickly and preemptively. For example, we anticipated that respondents would be hesitant to share their data. Hence we decided to take a privacy-by-design approach and recruited security and privacy researchers for our researcher survey. This resulted in us implementing the transparency table described in Section 4.2.

4.1.3 Choice of Integrated Platforms and Services.

We integrated our minimum viable product implementation with Qualtrics and SurveyMonkey, as we found them to be the most commonly used SPs (see section 3). We chose Fitbit, Instagram, and GitHub as initial data providers (DPs). Our formative research (Section 3) indicated a good potential for data-driven surveys in fitness tracking research. We chose to integrate with Fitbit, as it was the most reported fitness tracker brand in the studies we reviewed. Instagram was chosen as an example to demonstrate that DDS can integrate with social media services, as researchers studying social media often use surveys to study user experiences, and social-media scenarios were commonly proposed in our formative survey. GitHub was chosen as many researchers in our researcher survey reported doing surveys with software developers.

4.1.4 Overview and Core Features of DDS.

DDS integrates with existing SPs and enables using all the features offered by the SP. DDS obtains participants’ data (with permission) from DPs and transfers it to the SP. The core features of DDS are: (i) checking whether a survey participant has an account on a given DP platform, (ii) providing built-in (i.e., predefined) variables for each DP and making them available on the SP, and (iii) creating custom variables to select specific data using rules that researchers define. Variables provided by DDS enable conveniently collecting additional data and statistics about respondents, creating templated questions and answers (where placeholders are replaced with each respondent’s data), and configuring display and survey flow logic. For templated questions and answers, the variables can be inserted as piped text: “On average, you walk [steps.average] steps per day. Which of the following do you think contributes the most to you staying motivated for this?” For display and survey-flow logic, the variables can be compared with reference values: “Show this question/answer/branch only if account.creation_date <= 2020-01-01.” Custom variables enable deeper personalization by providing real and personal examples, instead of hypotheticals, to ask about categories of events or items. For instance, when asking about privacy concerns related to sharing location data, researchers could use DDS to show a map of the participant’s most recent run. This would help participants think concretely about their actual data, instead of an imagined abstraction. Additional examples are provided in the survey transcript in Appendix A.

4.1.5 Distributing DDS Surveys.

Data-driven surveys powered by DDS do not need personalized invitation links. After a participant gives access to the required data-provider platforms, a unique identifier is formed based on their account IDs and then linked to a unique distribution link on the SP. Researchers can set a policy on whether the same combination of data-provider accounts must be used to resume the survey and/or whether previously used accounts can be used to take the survey again (e.g., whether a respondent can take the survey once with Fitbit account F1 and Instagram account I1, and then again later with Fitbit account F1 and Instagram account I2).
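To make this linking step concrete, here is a minimal Python sketch of how such an identifier could be derived by hashing the combination of data-provider account IDs (DDS stores only hashed participant-DP IDs; see Section 4.3). The function name and the exact hashing scheme are our illustrative assumptions, not the actual DDS implementation.

```python
import hashlib

def respondent_id(dp_account_ids: dict[str, str]) -> str:
    """Derive a stable, pseudonymous respondent identifier from the
    participant's data-provider account IDs (hypothetical sketch).

    Sorting makes the result independent of the order in which accounts
    were connected, so the same combination always maps to the same ID.
    """
    canonical = "|".join(f"{dp}:{account_id}"
                         for dp, account_id in sorted(dp_account_ids.items()))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The same Fitbit account paired with a different Instagram account yields
# a different identifier, matching the resume/retake policies described above.
print(respondent_id({"fitbit": "F1", "instagram": "I1"}))
print(respondent_id({"fitbit": "F1", "instagram": "I2"}))
```

Under this sketch, the resume and retake policies described above amount to comparing identifiers across survey sessions.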

4.2 Privacy Considerations

We designed DDS with a careful focus on participants’ privacy, captured in the data-privacy policy. We aimed to address the points brought up in Section 3.2.2. Here, we present the key points.
Respondent Informed Consent. The landing page that a respondent sees provides several features that support giving informed consent for sharing their data. First, it informs participants that they will need to grant access to their data. Second, it provides a link to the privacy policy explaining data collection. Third, the survey distribution page displays a table that summarizes the exact data that will be extracted for each variable (see Figure 9a). For each variable, the table shows the DP, variable type, a textual variable description, and a link to the official API documentation.
Data Minimization. Requests are made via OAuth,19 which allows DDS to access the respondent’s data without needing their username and password, only their authorization. Figure 9b shows an example of this interface for Fitbit. If the DP offers different classes of data access, DDS will only request access to classes that are used in the researcher’s data-driven survey. DDS only downloads data needed to compute the variables that a survey requires.
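As an illustration of scope minimization, the snippet below builds a Fitbit OAuth 2.0 authorization URL that requests only the “activity” scope, which would suffice for a survey whose variables are all activity-based. The client ID, redirect URI, and scope choice are placeholders; this is a sketch of the general OAuth pattern, not the exact request DDS issues.

```python
from urllib.parse import urlencode

# Hypothetical example: a survey that only uses activity-based variables
# (e.g., step counts) requests the "activity" scope and nothing else.
params = {
    "client_id": "YOUR_FITBIT_APP_CLIENT_ID",             # placeholder
    "response_type": "code",                              # authorization-code flow
    "scope": "activity",                                  # only the data class the survey needs
    "redirect_uri": "https://dds.example.org/callback",   # placeholder
}
authorization_url = "https://www.fitbit.com/oauth2/authorize?" + urlencode(params)
print(authorization_url)  # the respondent is sent here to grant (or deny) access
```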
No Data Retention. The downloaded data is not saved to disk; instead, it is stored temporarily in the system memory during variable computation. Once the calculated variables are uploaded to the SP, the raw data and variables are deleted from DDS’s memory, and the access tokens for the DP(s) are revoked. The uploaded variables are saved on the SP to be used in the survey when the respondent takes it. They also provide context for researchers (who would otherwise see only the templated questions and answers), and enable exporting of the data through DDS. When exporting survey responses through DDS, responses are downloaded to the researchers’ computer, then placeholder values are replaced by variable values in the data. DDS does not store any of the final survey responses; only the SP does this.
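The resulting data handling can be summarized as a fetch-compute-upload-discard pipeline. The Python sketch below conveys the order of operations with hypothetical helper callables (fetch_raw_data, compute_variables, upload_to_sp, revoke_token); it is not DDS’s actual code.

```python
from typing import Callable

def process_respondent(
    fetch_raw_data: Callable[[str], dict],           # hypothetical: fetch a DP's raw data via its API
    compute_variables: Callable[[str, dict], dict],  # hypothetical: derive survey variables (e.g., steps.average)
    upload_to_sp: Callable[[dict], None],            # hypothetical: push variables to the survey platform
    revoke_token: Callable[[str], None],             # hypothetical: revoke the DP access token
    data_providers: list[str],
) -> None:
    """Illustrative no-retention pipeline: raw data stays in memory, computed
    variables go to the survey platform (SP), and DDS keeps no copy afterwards."""
    variables: dict = {}
    for dp in data_providers:
        raw = fetch_raw_data(dp)                     # held in memory only, never written to disk
        variables.update(compute_variables(dp, raw))
        del raw                                      # discard raw data once variables are computed
    upload_to_sp(variables)                          # variables now live only on the SP
    for dp in data_providers:
        revoke_token(dp)                             # access tokens revoked after use
    variables.clear()                                # nothing retained by DDS
```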

4.3 Architecture

DDS contains three main components: a back-end API, a front-end (web) app, and a database. All three components are proxied using an Nginx web server. The database stores researcher registration information, project configurations, and hashed participant-DP IDs.20 The back-end API provides access to the database, obtains data from DPs through their APIs, and communicates with the SPs’ APIs in order to upload to SPs the list of available variables (for use in setting up piped text) and to upload participants’ computed variable values for taking surveys. We implemented the back-end API in Flask [64] and Python 3 [70]. For the front end, we created a React [57] web app that both researchers and respondents can use. It makes API calls to the back-end API.
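For a flavor of the back-end structure, the minimal Flask sketch below exposes a hypothetical endpoint that lists a project’s enabled variables; the route, variable names, and in-memory data are our own placeholders rather than DDS’s actual API.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder in-memory data; in DDS, project configurations live in the database.
PROJECTS = {
    "demo-project": {
        "variables": [
            {"name": "fitbit.steps.average", "enabled": True},          # illustrative names
            {"name": "fitbit.account.creation_date", "enabled": True},
        ]
    }
}

@app.route("/projects/<project_id>/variables", methods=["GET"])
def list_variables(project_id):
    """Return the variables enabled for a project (hypothetical endpoint)."""
    project = PROJECTS.get(project_id)
    if project is None:
        return jsonify({"error": "project not found"}), 404
    return jsonify([v for v in project["variables"] if v["enabled"]])

if __name__ == "__main__":
    app.run(debug=True)
```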
The platform is configured to be conveniently deployed using Docker containers [42]. It can also be deployed directly through GitHub. A researcher would need to fork our repository and configure a few deployment parameters. The repository is documented and contains step-by-step deployment instructions. For a detailed architecture diagram, see Figure 11 in Appendix B.
Figure 3:
Figure 3: Screenshot of the UI for creating a new project from scratch or from an existing survey (DDS website).
Figure 4:
Figure 4: Screenshot of the UI for managing projects. To connect to their account on the survey platform (Qualtrics here), the researcher only needs to copy some text from their Qualtrics account to DDS. (DDS website)

4.4 Researcher Flow

Figure 5:
Figure 5: Screenshot of the UI for registering a data provider (DP) app and then adding the DP on DDS.
Figure 6:
Figure 6: Screenshot of the UI for adding custom variables (DDS website). The researcher specifies a name for the custom variable, a source data provider, a category of data, and a number of filters based on attribute values and a selection strategy (random or maximum/minimum value for an attribute).
Figure 7:
Figure 7: Screenshot of the UI showing what a custom variable looks like in the variables table (DDS website). A custom variable is composed of sub-variables that correspond to the attributes of the custom variable’s category and that can be enabled individually. The custom variable can be edited or deleted using the ‘edit’ and ‘delete’ buttons in the ‘actions’ column of the table.
Figure 8:
Figure 8: Screenshot of using DDS variables to create a data-driven survey (survey platform website, Qualtrics here). The variables imported from DDS appear in the interface of the survey platform and can be used by the researcher for survey flow and display/skip logic, as well as for question-and-answer texts.
The researcher flow begins with a researcher creating a new project on DDS (shown in Figure 3). They can either create a new project from scratch (Figure 3a), which would create a new survey on their chosen SP, or they can link an existing survey from an SP (Figure 3b). To create a new project from scratch, they must name their project (the survey created on the SP will have the same name) and provide the information required by the chosen SP (e.g., the platform API key for Qualtrics). Creating from an existing survey requires providing a new name, the source survey’s ID, and the information required to interact with the platform. DDS provides SP-specific instructions for obtaining the required information.
Once the DDS project is created, the researcher can manage it by adding DPs, enabling/disabling variables, changing test values, creating custom variables, syncing the variables, and testing the survey. We illustrate these steps with a running example using Fitbit.
To add a DP (Fitbit in this example), the researcher must register an app on the DP’s website using a short web form. The registered app will be used to communicate with a respondent’s account and to extract their data. Figure 5a shows an example of registering a Fitbit app. When a researcher clicks on ‘+ data provider’ (shown in Figure 5b), DDS provides most of the required information for the app registration form. Further, DDS also provides links to tutorials for registering the app and a link to the app creation page.21 Next, the researcher must add the DP on DDS (shown in Figure 5b) by selecting it from a list of available options and configuring parameters such as a client ID and secret. DDS provides instructions for where to find the required parameters.
After adding the DP, a number of built-in variables are made available in the DDS project (shown in Figure 4). Each variable has an implicit ‘exists’ version ([variable].exists), which will be True if a variable was calculated successfully and False otherwise. These can be used, for example, to avoid showing respondents questions or applying logic when there is no data. The researcher can select which variables to enable for the current project; for example, in Figure 4, the researcher enabled account creation date (from the “account” category) and activities by frequency (from the “activities” category). These variables define (a) which data DDS will request from the DP, and (b) which variables are available for use on the SP.
In addition to these built-in variables, the researcher can add custom variables that enable selecting specific items from the participant’s account for use in the survey, according to researcher-defined selection rules. Figure 6 shows a researcher creating a custom variable to select one specific running activity from the participant’s Fitbit data. In this example, Fitbit is the selected DP and activities is the selected data category to draw from. Filtering rules were added to select only the activities that lasted at least 3600 seconds, occurred on or after January 1, 2022, and have the activity type ‘Run.’ Finally, the researcher must choose a selection criterion to select one single item from the list of items that pass the filter. Options include choosing the item with the maximum or minimum value of some attribute, or choosing a random qualifying item. Here, the activity date is used to select the newest qualifying running activity. After a custom variable is defined, it is available in the main DDS variables list (Figure 7), where it can be enabled for use. A custom variable contains sub-variables (e.g., date, duration, and type for a Fitbit activity) that can be enabled individually and used in a survey, just like built-in variables. To enable a sub-variable, the custom variable as a whole must be enabled. Each sub-variable corresponds to an attribute of the selected custom variable item. A researcher can edit the rules used to create the custom variable or delete it using the ‘edit’ and ‘delete’ buttons in the ‘actions’ column of the table. Test values can be assigned to each sub-variable. Only enabled sub-variables are uploaded to SPs.
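The selection logic behind a custom variable can be pictured as a filter-then-select pass over the items in the chosen data category. The Python sketch below mirrors the running example (runs of at least 3600 seconds on or after January 1, 2022, with the newest one selected); the record attributes and function are illustrative assumptions, not DDS internals.

```python
from datetime import date
import random

# Illustrative Fitbit-style activity records (attribute names are assumptions).
activities = [
    {"type": "Run",  "date": date(2022, 3, 14), "duration_s": 4200, "steps": 7028},
    {"type": "Run",  "date": date(2021, 11, 2), "duration_s": 5400, "steps": 8100},
    {"type": "Walk", "date": date(2022, 5, 1),  "duration_s": 3900, "steps": 4500},
    {"type": "Run",  "date": date(2022, 6, 9),  "duration_s": 2400, "steps": 3600},
]

def select_custom_variable(items, filters, strategy, attribute=None):
    """Apply researcher-defined filters, then pick one item
    ('max'/'min' on an attribute, or 'random')."""
    candidates = [it for it in items if all(f(it) for f in filters)]
    if not candidates:
        return None  # the variable's .exists flag would then be False
    if strategy == "random":
        return random.choice(candidates)
    key = lambda it: it[attribute]
    return max(candidates, key=key) if strategy == "max" else min(candidates, key=key)

# Running example: newest run lasting at least 3600 s, on or after 2022-01-01.
selected = select_custom_variable(
    activities,
    filters=[
        lambda a: a["type"] == "Run",
        lambda a: a["duration_s"] >= 3600,
        lambda a: a["date"] >= date(2022, 1, 1),
    ],
    strategy="max",
    attribute="date",
)
print(selected)  # the activity from 2022-03-14 is the only qualifying run
```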
After enabling all the desired variables (built-in and/or custom), the researcher presses the “sync variables” button. This ensures that the SP knows about all the available variables. If the researcher wants to change the enabled variables—to add an additional variable, or to remove one that is not needed—they can simply sync the variables again to ensure the SP is up to date.
Synced variables are visible on the SP for use in logic and question design. At this point, the researcher can design the survey as normal, using any features provided by the SP and incorporating DDS variables as needed. An example of a question using these variables is shown in Figure 8.
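For instance, assuming the synced variables are exposed on Qualtrics as embedded data fields, a templated question could reference them with Qualtrics’s standard piped-text syntax (the field names below are hypothetical examples, not the actual DDS naming scheme):

    On ${e://Field/dds.fitbit.run.date}, you recorded a run of
    ${e://Field/dds.fitbit.run.steps} steps. What motivated you to go on that run?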
Once the researcher finishes designing their survey, they can preview it using the “preview survey” button in the DDS project view (Figure 4). This creates a survey (via the SP) in which each personalized value is set to the associated “test value” assigned by the researcher during variable selection. In Figure 4, the test value for the account creation date was set to January 1, 2020.
Once data collection is complete, the researcher can export the results via DDS (the “download data” button in Figure 4). The downloaded results mirror a standard export of data from the SP, but any placeholders in questions or response options are filled in with the actual personalized data per respondent. For a particular respondent in our running example, the downloaded data will show Q1 from Figure 8 with the actual value (e.g., 7028 steps) rather than the variable placeholder. In this way, the researcher can always reconstruct exactly what each respondent saw when taking the survey. Note that, as discussed above, DDS does not store participant data after uploading it to the SP; DDS instead uses the respondent data stored in the SP to generate the annotated export.
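The sketch below illustrates this post-processing step under simplifying assumptions: the placeholder syntax ({{...}}), column names, and values are ours and are not taken from the DDS implementation or any SP’s export format.

    # Minimal sketch: fill each question's placeholders with the per-respondent
    # values stored on the SP, so the export shows exactly what respondents saw.
    import re

    question_text = "Q1. You took {{dds.fitbit.steps.daily_average}} steps per day on average. Why?"

    responses = [
        {"ResponseId": "R_1", "dds.fitbit.steps.daily_average": "7028",  "Q1": "I commute on foot."},
        {"ResponseId": "R_2", "dds.fitbit.steps.daily_average": "11342", "Q1": "I run most mornings."},
    ]

    def annotate(text, row):
        """Replace every {{variable}} placeholder with that respondent's value."""
        return re.sub(r"\{\{(.+?)\}\}", lambda m: row.get(m.group(1), m.group(0)), text)

    for row in responses:
        print(row["ResponseId"], "saw:", annotate(question_text, row))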

4.5 Participant Flow

Figure 9: Screenshot of the UI for participants taking the survey.
Figure 10: Screenshot of what a participant would see when they take the survey (survey platform website, Qualtrics here). This is the end-result of Figure 8.
Survey participants receive an invitation link directing them to DDS. Participants are presented with an authorization interface asking them to log into each DP and grant access to their data, as described in the comprehensive table (shown in Figure 9a); Figures 9b and 9c show subsequent steps of this flow. Figure 10 shows an example of how the templated question from Figure 8 would look to a participant: the variables have been filled in with personalized values drawn from the participant’s Fitbit account.
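For orientation, the sketch below shows the token exchange and a single data request that such an authorization step typically involves, using Fitbit’s publicly documented OAuth 2.0 token endpoint and profile endpoint; the redirect URI is hypothetical, and error handling and the DDS-internal wiring are omitted.

    # Minimal sketch (not the DDS implementation): exchange the authorization code
    # returned by the DP for an access token, then fetch one data item.
    import requests

    def exchange_code_for_token(code, client_id, client_secret, redirect_uri):
        resp = requests.post(
            "https://api.fitbit.com/oauth2/token",          # Fitbit's public token endpoint
            auth=(client_id, client_secret),                 # HTTP Basic client authentication
            data={
                "grant_type": "authorization_code",
                "code": code,
                "redirect_uri": redirect_uri,
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()                                   # access_token, refresh_token, ...

    def fetch_profile(access_token):
        resp = requests.get(
            "https://api.fitbit.com/1/user/-/profile.json",  # documented Fitbit Web API endpoint
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    # token = exchange_code_for_token(code, CLIENT_ID, CLIENT_SECRET, "https://dds.example.org/callback")
    # profile = fetch_profile(token["access_token"])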

4.6 Extensions and Forward Compatibility

DDS was designed and implemented to be extensible, and it can be integrated with SPs and DPs that offer APIs. SPs with APIs similar to those of Qualtrics (a single global API key) and SurveyMonkey (fine-grained authorization via OAuth) should be easy to integrate. In contrast, integrating SPs that do not offer a comparable web API and instead provide remote control via JSON-RPC (such as LimeSurvey) could require significant effort. Similarly, integrating Google Forms would require workarounds, such as using add-ons (e.g., the Dynamic Fields add-on for Google Forms) to pipe data from another source (e.g., Google Sheets). Additional DPs could be integrated, as many online platforms offer APIs with OAuth support.
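One way to structure such extensibility, sketched here purely as an illustration (the method names do not necessarily match the DDS code base), is a pair of small adapter interfaces that each SP and DP integration implements; supporting a new platform then amounts to implementing one of these interfaces against that platform’s API.

    # Minimal sketch of an adapter-style design for extensibility.
    from abc import ABC, abstractmethod

    class SurveyPlatform(ABC):
        """Operations DDS needs from a survey platform (SP)."""

        @abstractmethod
        def create_survey(self, name: str) -> str: ...            # returns the SP's survey ID

        @abstractmethod
        def sync_variables(self, survey_id: str, variables: list) -> None: ...

        @abstractmethod
        def upload_respondent_data(self, survey_id: str, respondent_id: str, values: dict) -> str: ...

        @abstractmethod
        def export_responses(self, survey_id: str) -> list: ...

    class DataProvider(ABC):
        """Operations DDS needs from a data provider (DP)."""

        @abstractmethod
        def authorization_url(self, state: str) -> str: ...

        @abstractmethod
        def exchange_code(self, code: str) -> dict: ...           # OAuth 2.0 token exchange

        @abstractmethod
        def compute_variables(self, token: dict, enabled: list) -> dict: ...

        @abstractmethod
        def revoke_access(self, token: dict) -> None: ...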
Regarding forward compatibility, drastic changes to SPs or DPs could prevent DDS from working. We do not expect major issues, as we rely on public web APIs, which tend to be fairly stable or at least backward compatible, to access basic functions of SPs and to extract respondents’ data from DPs. These public APIs are unlikely to change in a way that would prevent DDS from functioning [87].22 However, if a public web API were to be shut down altogether, no workaround would be possible.
With regard to continued development, a road map of features and functionality that we plan to implement is available on GitHub.23 We hope that DDS will gain community support in the longer term to maintain the platform and continue adding SPs and DPs.

5 Discussion

5.1 Contribution

In this paper, we make two key contributions to the HCI research community. First, we conducted formative research (a literature survey and a user survey of researchers) to characterize when and why researchers might find data-driven surveys useful. We found many ways in which data-driven surveys can be beneficial, including improving accuracy, reducing the burden on participants to recall events, replacing some survey questions with direct measurements, and providing concrete, personal examples rather than hypothetical ones when asking participants about their perceptions and preferences. Currently, implementing this kind of data-driven survey is an ad-hoc process, often requiring the technical expertise to develop custom tools. Our formative research also underscores the importance of carefully managing participants’ privacy and of obtaining meaningful consent when accessing their data from other platforms as part of a survey. These privacy aspects are not specific to our solution but apply to data-driven surveys in general, including those conducted in the related works.
Second, we designed and implemented DDS, a minimum viable product tool that researchers can use to easily implement data-driven surveys, regardless of their technical expertise. DDS currently integrates with Qualtrics and SurveyMonkey as survey platforms (SPs), as well as Fitbit, Instagram, and GitHub as data providers (DPs). DDS enables using respondents’ data from DPs in survey flow logic as well as in templated questions and answers. DDS also enables selecting individual items (e.g., a fitness activity or an Instagram post) based on researcher-defined rules. DDS is designed to limit both the data requested from DPs and the storage of the obtained data; permissions are revoked immediately after the necessary data is obtained. DDS is open source and extensible, and we hope that other researchers will make use of it to enhance their surveys.

5.2 Limitations and Future Work

5.2.1 Limitations.

Our work has several limitations. First, our literature survey was limited to papers relating to fitness trackers, only a small slice of HCI overall (and survey research more broadly). As such, we may not have learned about all possible use cases for data-driven surveys. However, we believe that we found enough evidence of their potential utility to motivate developing a tool. Our researcher survey was also relatively small (N = 52) and did not reach a representative sample of HCI researchers; as with the literature survey, we nonetheless found the results sufficiently encouraging to develop DDS.
Most importantly, though data-driven surveys can provide many important benefits for research, they do ask participants to provide access to a potentially large amount of personal data, which could be uncomfortable. Requiring participants to log into specific accounts and share data could reduce participation and bias samples toward those willing to share. As such, it is critical that researchers using this approach apply best practices to earn and maintain participant trust: limiting requests to DPs’ APIs to the minimum required to achieve their research goals; being clear from the outset about the information participants will be asked to share and the way it will be used; obtaining meaningful consent; and using best practices for secure data storage.
The use of data-driven surveys may also have unintended consequences. First, regarding security and privacy, DDS could make it easier for malicious actors posing as researchers to extract data from people.24 Second, issues related to survey design, such as priming, could become more pronounced with data-driven surveys. For example, showing participants the routes of their runs and then asking about privacy concerns may prime them to be more privacy conscious. Hence, researchers should take great care when designing data-driven surveys.

5.2.2 Future Work.

Currently, DDS is limited to two SPs and three DPs with only a few built-in variables. Nonetheless, we believe it is sufficient to demonstrate the feasibility of the approach, illustrate the underlying concepts, and showcase its potential. We will continue developing DDS, hopefully with the input and contributions of interested researchers and developers. In future work, we hope to extend DDS to other DPs, such as Spotify, Slack (for surveys of workers in different companies or industries), other social media platforms, and other data sources with useful APIs. This extension does not present significant technical difficulties, as many online platforms follow a similar flow for using their APIs, including OAuth for authentication.
Another extension to DDS could be support for the experience sampling method (ESM) [50, 77], which captures participants’ experiences and behaviors in their natural context. ESM can lead to survey fatigue when participants must respond at set intervals, especially if each prompt contains multiple questions. Previous studies have shown that simple personalization in the ESM method (e.g., selecting a time slot for reporting [55]) or the use of event-contingent notifications (e.g., following a smartphone unlock event [78]) can lead to higher response rates and recall accuracy. An extended version of DDS could trigger surveys based on participants’ real-time data (as obtained from DPs’ APIs), for example only after certain exercise events. A data-driven approach could also reduce the number of questions per ESM survey by measuring activity rather than requiring self-reporting.
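As a rough illustration of such event-contingent triggering, the sketch below polls a DP for new qualifying events and sends a survey invitation when one appears; fetch_recent_activities and send_survey_invitation are hypothetical placeholders rather than existing DDS or Fitbit functions.

    # Minimal sketch of event-contingent ESM triggering.
    import time

    def fetch_recent_activities(token, since_iso):
        """Placeholder: would call the DP's API for activities recorded after since_iso."""
        raise NotImplementedError

    def send_survey_invitation(respondent_id, activity):
        """Placeholder: would create a personalized survey distribution on the SP."""
        raise NotImplementedError

    def esm_loop(respondent_id, token, poll_seconds=900):
        last_seen = "2024-01-01T00:00:00"
        while True:
            for activity in fetch_recent_activities(token, since_iso=last_seen):
                if activity.get("type") == "Run":                     # trigger only after runs
                    send_survey_invitation(respondent_id, activity)
                last_seen = max(last_seen, activity["start_time"])    # ISO strings sort correctly
            time.sleep(poll_seconds)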
To tackle the ethical and privacy issues raised by data-driven surveys,25 we intend to explore the design of privacy-preserving solutions. Such solutions could integrate computational data-obfuscation techniques (e.g., rounding numeric variables, redacting sensitive text, or blurring parts of photos) to protect respondents’ privacy with respect to the researchers and to limit the risk of re-identification based on using the values of the exported variables as pseudo-identifiers.
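For example, a simple obfuscation pass over the exported variables could look like the sketch below; the rounding and redaction rules, as well as the variable names, are arbitrary examples rather than a vetted privacy mechanism.

    # Minimal sketch: obfuscate variables before they reach the researcher by
    # rounding numeric values and redacting parts of free text.
    import re

    def obfuscate(variables):
        safe = {}
        for name, value in variables.items():
            if isinstance(value, (int, float)):
                safe[name] = round(value, -2)                        # round to the nearest hundred
            elif isinstance(value, str):
                safe[name] = re.sub(r"@\w+", "[redacted]", value)    # e.g., drop @-mentions
            else:
                safe[name] = value
        return safe

    print(obfuscate({"dds.fitbit.steps.daily_average": 7028,
                     "dds.instagram.post.caption": "Great run with @alice today!"}))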
Finally, we plan to evaluate DDS from the researchers’ perspective. Drawing on suggestions from Ledo et al. [52] on evaluating toolkits in HCI, we plan to conduct a usability study [51] in which we ask researchers to develop mock surveys and measure DDS’s usability using established metrics such as the System Usability Scale (SUS) [12], as well as efficiency, learnability, accuracy, and satisfaction. Findings from such a study could suggest ways to improve DDS. We also anticipate reporting on case studies, both of our own usage and of other researchers’. Furthermore, we intend to study respondents’ acceptance of, and privacy concerns regarding, data-driven surveys. Given the aforementioned privacy issues and the general limitations of data-driven surveys, understanding how respondents perceive them would offer valuable insights.

6 Conclusion

We conducted formative research using literature and researcher surveys, and we developed DDS, an open-source platform that provides a streamlined and simple way to create data-driven surveys. It currently integrates with Qualtrics and SurveyMonkey, and supports importing data from Fitbit, Instagram, and GitHub. DDS enables both technically savvy and non-savvy users to create data-driven surveys without requiring any programming. We believe that the thoughtful use of data-driven surveys can improve the quality of user-survey results and experiences, and that this can create exciting new opportunities for user-centered research.

Acknowledgments

This work was partially funded by the Swiss National Science Foundation with Grant #200021_178978 (PrivateLife). We thank Holly Cogliati for proofreading this article. We thank Lahari Goswami and James Tyler for participating in the researcher survey cognitive pretests. We thank Carmela Troncoso and the School of Computer and Communication Sciences at École Polytechnique Fédérale de Lausanne for supporting Michelle Mazurek’s visit to Switzerland, which contributed to this collaboration. We thank the anonymous researchers who participated in our online survey and shared their valuable insights and experiences.

A Survey Transcript

Survey sections                                Question numbers
Sec. 1 Consent                                 Q1
Sec. 2 Research Field                          Q2, Q3
Sec. 3 Screening                               Q4, Q5, Q6
Sec. 4 Main - Survey Flow and Skip Logic       Q7, Q8, Q9, Q10, Q11, Q12, Q13
Sec. 5 Main - Questions and Answers            Q14, Q15, Q16, Q17, Q18, Q19, Q20
Sec. 6 Main - Custom Variables                 Q21, Q22, Q23, Q24, Q25, Q26, Q27
Sec. 7 Main - Open                             Q28
Sec. 8 Background                              Q29, Q30
Sec. 9 Follow-up                               Q31
Table 4: Survey guide.
Note: Coding rules (shown in brackets) were not visible to respondents.
Sec. 1. Consent
Q1. STUDY
You are invited to participate in a study exploring the potential for developing new survey design tools. We greatly appreciate your participation. This study is conducted and financed by the Information Security and Privacy Lab headed by Prof. Kévin Huguenin of HEC Lausanne, University of Lausanne (UNIL).
STUDY PROCEDURE
If you agree to participate, we ask you to answer this online survey with about 20 questions. You will need about 7 minutes to complete the survey.
If you decide to participate in the study, we ask you to commit seriously to it.
PARTICIPATION CRITERIA
To be eligible for this study, you must confirm that you exactly meet the following conditions of participation: being a researcher with experience in using online user surveys for your research and having experience with at least one of the popular online survey platforms (e.g., Qualtrics, SurveyMonkey, LimeSurvey, Google Forms).
CONFIDENTIALITY AND DATA PROCESSING
During this study, no personally identifiable information will be collected. We will collect only your age range and gender, not your name, date of birth, or contact information. Only for participants interested in attending a follow-up interview will we collect their e-mail address for further communication (optional).
All data will be stored on a secured server and only researchers participating in this study will have access to it. The results of this research study might be published in scientific journals or conferences. Any published information will be aggregated and/or anonymized.
YOUR RIGHTS
Your participation is completely voluntary and you are completely free to stop the survey at any time, without notice.
You may choose to terminate your participation in this study at any time and for any reason. In this case your data will be deleted.
REMUNERATION
At the end of the study, you will receive no compensation for your participation in this study.
CONSENT
If you wish to participate in this research study, please select the “Agree” option to continue. It will indicate that you are eligible for this study, that you will answer all questions truthfully, and that you consent that we use the collected data under the conditions stated above. If you select “Disagree” you will not participate in this research survey.
[End survey if ‘Disagree’ is selected.]
◯ Agree
◯ Disagree
Sec. 2. Research Field
Q2.
What is your main research field, according to ACM’s classification?
(a)
General category [ ] (dropdown list)
(b)
Specific field [ ] (dropdown list [dependent on ’General category’ choice])
Q3.
What is your research field? Provide a brief description.
[Only show if ‘Does not apply’ was selected in Q2.]
[ ] (text)
Sec. 3. Screening
Q4.
How many online user surveys have you conducted in your research during the last 5 years? For example, for doing full survey research or for screening participants while recruiting people for studies.
[End survey if ‘0’ is selected.]
◯ 0
◯ 1
◯ 2
◯ 3
◯ 4 or more
Q5.
What proportion of your research projects use online user surveys?
For example, for doing full survey research or for screening participants while recruiting people for studies.
[End survey if ‘None at all’ is selected.]
◯ None at all
◯ Very little
◯ A little
◯ About half
◯ A lot
◯ A great deal
◯ All
Q6.
Which of the following online survey platforms have you previously used in your research?
[Show question only if ‘0’ was not selected in Q4 and ‘None at all’ was not selected in Q5.]
[End survey if ‘None of the above’ is selected.]
[Choice order is random, except for ‘Other (please specify)’ and ‘None of the above’ always appear at the bottom.]
\(\Box\) Qualtrics
\(\Box\) LimeSurvey
\(\Box\) Google Forms
\(\Box\) SurveyMonkey
\(\Box\) Typeform
\(\Box\) 123FormBuilder
\(\Box\) Formstack
\(\Box\) Alchemer
\(\Box\) VerticalResponse
\(\Box\) Other (please specify): [ ] (text)
\(\Box\) None of the above
Sec. 4. Main - Survey Flow and Skip Logic
Q7.
Imagine that you could use survey respondents’ data, imported from other digital platforms like social networks and activity tracking services, to configure survey flows, skip logic, and display logic. For example:
“Skip to question Q5 if the respondent has made fewer than 200 posts on Facebook.”
“Display option O8 if the respondent ever recorded a ‘yoga’ activity on Fitbit within the last year.”
Q8.
Have you previously used/implemented similar functionality in your surveys?
◯ Yes, I have done this before
◯ No, I have not done this before
Q9.
How useful do you find this feature?
◯ Extremely useless
◯ Moderately useless
◯ Slightly useless
◯ Neither useful nor useless
◯ Slightly useful
◯ Moderately useful
◯ Extremely useful
Q10.
If such functionality was easy to use/implement, how likely would you be to use it?
◯ Extremely unlikely
◯ Moderately unlikely
◯ Slightly unlikely
◯ Neither likely nor unlikely
◯ Slightly likely
◯ Moderately likely
◯ Extremely likely
Q11.
Could you elaborate on a scenario where you would use this feature?
[ ] (text)
Q12.
How difficult was it to use/implement such functionality (i.e., configuring survey flows, skip logic, and display logic) in your surveys?
[Only show if ‘Yes, I have done this before’ is selected in Q8.]
◯ Extremely easy
◯ Moderately easy
◯ Slightly easy
◯ Neither easy nor difficult
◯ Slightly difficult
◯ Moderately difficult
◯ Extremely difficult
Q13.
Why have you not used/implemented such functionality (i.e., configuring survey flows, skip logic, and display logic)?
[Only show if ‘No, I have not done this before’ is selected in Q8.]
[Choice order is random, except for ‘Other, (please specify)’ and ‘Prefer not to answer’ always appear at the bottom.]
\(\Box\) I found it too (technically) demanding to implement it
\(\Box\) I did not think of this
\(\Box\) Other, please specify [ ] (text)
\(\Box\) Prefer not to answer
Sec. 5. Main - Questions and Answers
Q14.
Imagine that you could create templated questions, where placeholder variables would be replaced by respondents’ data, imported from other digital platforms like social networks and activity tracking services.
For example:
“You have done X live streams on Facebook. Why do you use Facebook for this instead of other platforms?”
“Your most active month (in terms of step count) is X. Please explain why.”
Q15.
Have you previously used/implemented similar functionality in your surveys?
◯ Yes, I have done this before
◯ No, I have not done this before
Q16.
How useful do you find it to be able to use survey respondents’ data (imported from other digital platforms like social networks and activity tracking services) to create templated questions and answers?
◯ Extremely useless
◯ Moderately useless
◯ Slightly useless
◯ Neither useful nor useless
◯ Slightly useful
◯ Moderately useful
◯ Extremely useful
Q17.
If such functionality was easy to use/implement, how likely do you think you would be to use survey respondents’ data (imported from other digital platforms like social networks and activity tracking services) to create templated questions and answers, if such functionality was available?
◯ Extremely unlikely
◯ Moderately unlikely
◯ Slightly unlikely
◯ Neither likely nor unlikely
◯ Slightly likely
◯ Moderately likely
◯ Extremely likely
Q18.
Could you elaborate on a scenario where such a feature would be useful to you?
[ ] (text)
Q19.
How difficult was it to use/implement such functionality (i.e., creating templated questions) in your surveys?
[Only show if ‘Yes, I have done this before’ is selected in Q15.]
◯ Extremely easy
◯ Moderately easy
◯ Slightly easy
◯ Neither easy nor difficult
◯ Slightly difficult
◯ Moderately difficult
◯ Extremely difficult
Q20.
Why have you not used/implemented such functionality (i.e., creating templated questions)?
[Only show if ‘No, I have not done this before’ is selected in Q15.]
[Choice order is random, except for ‘Other, (please specify)’ and ‘Prefer not to answer’ always appear at the bottom.]
\(\Box\) I found it too (technically) demanding to implement it
\(\Box\) I did not think of this
\(\Box\) Other, please specify [ ] (text)
\(\Box\) Prefer not to answer
Sec. 6. Main - Custom Variables
Q21.
Imagine variations of the previous features where you could ask about specific events or activities in a respondent’s history with a platform or service. For example, define ‘var’ as:
“the most recent Facebook post that has at least 200 likes.”
“the most recent running activity that took place during 2022.”
Then you can use them to personalize the survey. For example:
“On the var.date you made the following post on Facebook: var.content. Why...”
“On var.date you did the following run: var.map. Why...”
Q22.
Have you previously used/implemented similar functionality in your surveys?
◯ Yes, I have done this before
◯ No, I have not done this before
Q23.
How useful do you find it to be able to select specific events/posts/activities from survey respondents’ data (imported from other digital platforms like social networks and activity tracking services) to create templated questions and answers?
◯ Extremely useless
◯ Moderately useless
◯ Slightly useless
◯ Neither useful nor useless
◯ Slightly useful
◯ Moderately useful
◯ Extremely useful
Q24.
If such functionality was easy to use/implement, how likely do you think you would be to select specific events/posts/activities from survey respondents’ data (imported from other digital platforms like social networks and activity tracking services) to then create templated questions and answers, if such functionality was available?
◯ Extremely unlikely
◯ Moderately unlikely
◯ Slightly unlikely
◯ Neither likely nor unlikely
◯ Slightly likely
◯ Moderately likely
◯ Extremely likely
Q25.
Could you elaborate on a scenario where such a feature would be useful to you?
[ ] (text)
Q26.
How difficult was it to use/implement such functionality (i.e., asking about specific events or activities in a respondent’s history) in your surveys?
[Only show if ‘Yes, I have done this before’ is selected in Q22.]
◯ Extremely easy
◯ Moderately easy
◯ Slightly easy
◯ Neither easy nor difficult
◯ Slightly difficult
◯ Moderately difficult
◯ Extremely difficult
Q27.
Why have you not used/implemented such functionality (i.e., asking about specific events or activities in a respondent’s history)?
[Only show if ‘No, I have not done this before’ is selected in Q22.]
[Choice order is random, except for ‘Other, (please specify)’ and ‘Prefer not to answer’ always appear at the bottom.]
\(\Box\) I found it too (technically) demanding to implement it
\(\Box\) I did not think of this
\(\Box\) Other, please specify [ ] (text)
\(\Box\) Prefer not to answer
Sec. 7. Main - Open
Q28.
Based on the three aforementioned features, could you briefly describe any (1) pain points that you have and think could be addressed or (2) benefits that could be gained from using survey respondents’ data (imported from other digital platforms like social networks and activity tracking services)?
[ ] (text)
Sec. 8. Background
[Question order is random in this section.]
Q29.
How old are you?
◯ Under 18
◯ 18 - 24
◯ 25 - 34
◯ 35 - 44
◯ 45 - 54
◯ 55 - 64
◯ 65+
◯ Prefer not to disclose
Q30.
What is your gender?
\(\Box\) Woman
\(\Box\) Man
\(\Box\) Non-binary
\(\Box\) Prefer to self-describe [ ] (text)
\(\Box\) Prefer not to disclose
Sec. 9. Follow-up
Q31.
We may conduct a short follow-up survey or interview. If you would be interested in participating, please enter your e-mail below:
[Only show if ‘Qualtrics’ was selected in Q6.]
[ ] (text)

B Detailed Architecture

Figure 11: Detailed architecture of DDS.

Footnotes

1
Note that data obtained from social media platforms represents a curated projection of one’s life. Similarly, data from platforms such as Fitbit and GitHub represents a projection, as someone may engage in an activity without using an associated online service (e.g., exercising without wearing their Fitbit tracker). It is important to acknowledge that data collected from online services such as Fitbit may not be perfectly accurate [28], but should be sufficiently accurate to provide useful insights [35].
2
Data-driven questions cannot and should not replace traditional, more general questions. Rather, they can enrich the research process with detailed contextual insights.
3
See, for instance, https://help.instagram.com/181231772500920, last accessed Feb. 2024.
5
See https://smartriqs.com/, last accessed Feb. 2024.
6
See https://arieslab.github.io/lyzeli/#/, last accessed Feb. 2024.
7
See https://nodegame.org/, last accessed Feb. 2024.
8
See https://gorilla.sc/product/tools/, last accessed Feb. 2024.
9
See https://lab.js.org/, last accessed Feb. 2024.
10
See http://www.otree.org/, last accessed Feb. 2024.
11
See https://psiturk.org/, last accessed Feb. 2024.
12
We used the following search strings: [“physical activity data” OR “physical activity tracker” OR “fitness data” OR “fitness tracker” OR “wearable activity tracker” OR “fitness tracking” OR “wearable activity tracking”] AND [“utility” OR “privacy” OR “security” OR “perception” OR “understanding” OR “experience” OR “expectation” OR “sharing”] AND [“system” OR “device” OR “application” OR “app” OR “service” OR “bracelet” OR “wrist-worn”].
13
The keywords and exclusion criteria partially overlap with those used in a survey paper on the utility, privacy, and security of wearable activity trackers [72]. Several of this work’s authors are co-authors of the survey paper.
14
Using the ACM Computing Classification System (CCS). See https://dl.acm.org/ccs, last accessed Feb. 2024.
15
Note: Four researchers reported it being neither easy nor difficult.
16
The content inside the bracket indicates the researchers’ background as follows: the number of surveys in the last five years, the proportion of research projects with user surveys, and their primary research area according to ACM’s CCS.
17
While we asked researchers about the pain points in their research, their responses also covered the challenges of future data-driven surveys.
19
See https://oauth.net/2/, last accessed Feb. 2024.
20
To enable us to retrieve the same unique survey distribution URL if a given project is not supposed to enable the same participant to take the survey more than once.
21
For DPs that support this, DDS will pre-fill parts of the form by including information from the project in the URL.
22
For example, Qualtrics has introduced few breaking changes to their API, and none of them removed the core functions that we use [1].
24
Note that without DDS this is already an issue. For example, see the Cambridge Analytica scandal [14].
25
Buchanan and Hvizdak [13] studied the prevalence of online user surveys (in general, not necessarily data-driven ones) in academic research and the ethical and methodological challenges they pose to universities’ human research ethics committees (HREC), based on an analysis of 750 HRECs in the United States.

Supplemental Material

MP4 File - Video Preview
MP4 File - Video Presentation

References

[1]
2023. List of Breaking Changes. https://api.qualtrics.com/wsoddnhvf5mhv-list-of-breaking-changes
[2]
Oluwatoyin Ajilore, Lauretta Eloho Malaka, Aderonke Busayo Sakpere, and Ayomiposi Grace Oluwadebi. 2021. Interactive Survey Design Using Pidgin and GIFS. In Proc. of the 3rd African Human-Computer Interaction Conference: Inclusiveness and Empowerment(AfriCHI 2021). Association for Computing Machinery, Maputo, Mozambique, 52–64. https://doi.org/10.1145/3448696.3448701
[3]
Reza Anaraky, Tahereh Nabizadeh, Bart Knijnenburg, and Marten Risius. 2018. Reducing Default and Framing Effects in Privacy Decision-Making. In SIGHCI 2018 Proc. Assoc. for Info. Sys. (AIS). Atlanta, GA, USA. https://aisel.aisnet.org/sighci2018/19
[4]
Judd Antin and Aaron Shaw. 2012. Social desirability bias and self-reports of motivation: a study of amazon mechanical turk in the US and India. In Proc. of the CHI Conference on Human Factors in Computing Systems(CHI ’12). Association for Computing Machinery, Austin Texas USA, 2925–2934. https://doi.org/10.1145/2207676.2208699
[5]
Alexander L. Anwyl-Irvine, Jessica Massonnié, Adam Flitton, Natasha Kirkham, and Jo K. Evershed. 2020. Gorilla in our midst: An online behavioral experiment builder. Behavior Research Methods 52, 1 (Feb. 2020), 388–407. https://doi.org/10.3758/s13428-019-01237-x
[6]
Oshrat Ayalon and Eran Toch. 2013. Retrospective privacy: managing longitudinal privacy in online social networks. In Proceedings of the Symposium on Usable Privacy and Security. Association for Computing Machinery, Newcastle, United Kingdom, 1–13. https://doi.org/10.1145/2501604.2501608
[7]
Stefano Balietti. 2017. nodeGame: Real-time, synchronous, online experiments in the browser. Behavior Research Methods 49, 5 (Oct. 2017), 1696–1715. https://doi.org/10.3758/s13428-016-0824-z
[8]
Krutheeka Baskaran, Saji K. Mathew, and Vijayan Sugumaran. 2020. Are You Coping or Copping Out? Wearable Users’ Information Privacy Perspective. In AMCIS 2020 Proceedings. Association for Information Systems. https://aisel.aisnet.org/amcis2020/info_security_privacy/info_security_privacy/8
[9]
Lujo Bauer, Lorrie Faith Cranor, Saranga Komanduri, Michelle L. Mazurek, Michael K. Reiter, Manya Sleeper, and Blase Ur. 2013. The post anachronism: the temporal dimension of facebook privacy. In Proceedings of the 12th ACM workshop on Workshop on privacy in the electronic society(WPES ’13). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/2517840.2517859
[10]
Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (Jan. 2006), 77–101. https://doi.org/10.1191/1478088706qp063oa
[11]
Virginia Braun and Victoria Clarke. 2021. One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology 18, 3 (July 2021), 328–352. https://doi.org/10.1080/14780887.2020.1769238
[12]
John Brooke. 1996. SUS: A ’Quick and Dirty’ Usability Scale. In Usability Evaluation In Industry (1st edition ed.). Vol. 189. CRC Press, 189–194.
[13]
Elizabeth A. Buchanan and Erin E. Hvizdak. 2009. Online Survey Tools: Ethical and Methodological Concerns of Human Research Ethics Committees. Journal of Empirical Research on Human Research Ethics 4, 2 (June 2009), 37–48. https://doi.org/10.1525/jer.2009.4.2.37
[14]
Carole Cadwalladr and Emma Graham-Harrison. 2018. Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian (March 2018). https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
[15]
Elizabeth Chell. 1998. Critical incident technique. In Qualitative methods and analysis in organizational research: A practical guide. Sage Publications Ltd, Thousand Oaks, CA, 51–72.
[16]
Daniel L. Chen, Martin Schonger, and Chris Wickens. 2016. oTree—An open-source platform for laboratory, online, and field experiments. Journal of Behavioral and Experimental Finance 9 (March 2016), 88–97. https://doi.org/10.1016/j.jbef.2015.12.001
[17]
Mauro Cherubini, Kavous Salehzadeh Niksirat, Marc-Olivier Boldi, Henri Keopraseuth, Jose M. Such, and Kévin Huguenin. 2021. When Forcing Collaboration is the Most Sensible Choice: Desirability of Precautionary and Dissuasive Mechanisms to Manage Multiparty Privacy Conflicts. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (April 2021), 53:1–53:36. https://doi.org/10.1145/3449127
[18]
Jason W. Clark, Peter Snyder, Damon McCoy, and Chris Kanich. 2015. "I Saw Images I Didn’t Even Know I Had": Understanding User Perceptions of Cloud Storage Privacy. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, Seoul Republic of Korea, 1641–1644. https://doi.org/10.1145/2702123.2702535
[19]
Ned Cooper, Tiffanie Horne, Gillian R Hayes, Courtney Heldreth, Michal Lahav, Jess Holbrook, and Lauren Wilcox. 2022. A Systematic Review and Thematic Analysis of Community-Collaborative Approaches to Computing Research. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems(CHI ’22). ACM, New Orleans LA USA, 1–18. https://doi.org/10.1145/3491102.3517716
[20]
Ruixuan Dai, Thomas Kannampallil, Jingwen Zhang, Nan Lv, Jun Ma, and Chenyang Lu. 2022. Multi-Task Learning for Randomized Controlled Trials: A Case Study on Predicting Depression with Wearable Data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 2 (July 2022), 50:1–50:23. https://doi.org/10.1145/3534591
[21]
Nicola Dell, Vidya Vaidyanathan, Indrani Medhi, Edward Cutrell, and William Thies. 2012. "Yours is better!": participant response bias in HCI. In Proc. of the CHI Conference on Human Factors in Computing Systems(CHI ’12). Association for Computing Machinery, Austin Texas USA, 1321–1330. https://doi.org/10.1145/2207676.2208589
[22]
Martyn Denscombe. 2010. The Good Research Guide: For Small-Scale Social Research Projects (4th edition ed.). Open University Press, Maidenhead.
[23]
Don A. Dillman, Jolene D. Smyth, and Leah Melani Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). Wiley, Hoboken. https://www.wiley.com/en-fr/Internet%2C+Phone%2C+Mail%2C+and+Mixed+Mode+Surveys%3A+The+Tailored+Design+Method%2C+4th+Edition-p-9781118456149
[24]
Nickolas Dreher, Edward Kenji Hadeler, Sheri J. Hartman, Emily C. Wong, Irene Acerbi, Hope S. Rugo, Melanie Catherine Majure, Amy Jo Chien, Laura J. Esserman, and Michelle E. Melisko. 2019. Fitbit Usage in Patients With Breast Cancer Undergoing Chemotherapy. Clinical Breast Cancer 19, 6 (Dec. 2019), 443–449.e1. https://doi.org/10.1016/j.clbc.2019.05.005
[25]
Nico Ebert, Kurt Alexander Ackermann, and Peter Heinrich. 2020. Does Context in Privacy Communication Really Matter? — A Survey on Consumer Concerns and Preferences. In Proc. of the CHI Conference on Human Factors in Computing Systems(CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3313831.3376575
[26]
Nico Ebert, Björn Scheppler, Kurt Alexander Ackermann, and Tim Geppert. 2023. QButterfly: Lightweight Survey Extension for Online User Interaction Studies for Non-Tech-Savvy Researchers. In Proc. of the CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, Hamburg Germany, 1–8. https://doi.org/10.1145/3544548.3580780
[27]
Daniel A. Epstein, Jennifer H. Kang, Laura R. Pina, James Fogarty, and Sean A. Munson. 2016. Reconsidering the device in the drawer: lapses as a design opportunity in personal informatics. In Proc. of the Conf. on Ubiquitous Computing (UbiComp). Association for Computing Machinery, Heidelberg, Germany, 829–840. https://doi.org/10.1145/2971648.2971656
[28]
Lynne M. Feehan, Jasmina Geldman, Eric C. Sayre, Chance Park, Allison M. Ezzat, Ju Young Yoo, Clayon B. Hamilton, and Linda C. Li. 2018. Accuracy of Fitbit Devices: Systematic Review and Narrative Syntheses of Quantitative Data. JMIR mHealth and uHealth 6, 8 (Aug. 2018). https://doi.org/10.2196/10527
[29]
Denzil Ferreira, Vassilis Kostakos, and Anind K. Dey. 2015. AWARE: Mobile Context Instrumentation Framework. Frontiers in ICT 2 (April 2015). https://doi.org/10.3389/fict.2015.00006
[30]
Figma, Inc. 2023. Figma. https://www.figma.com/
[31]
John C. Flanagan. 1954. The critical incident technique. Psychological Bulletin 51, 4 (1954), 327–358. https://doi.org/10.1037/h0061470
[32]
Theodore W Frick and Charles M Reigeluth. 1999. Formative Research: A Methodology for Creating and Improving Design Theories. Instructional-design theories and models: A new paradigm of instructional theory 2 (1999), 633–652.
[33]
Sandra Gabriele and Sonia Chiasson. 2020. Understanding Fitness Tracker Users’ Security and Privacy Knowledge, Attitudes and Behaviours. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems(CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376651
[34]
Shaun Genter, Yazmín García Trejo, and Elizabeth Nichols. 2022. Drag-and-drop versus numeric entry options: a comparison of survey ranking questions in qualtrics. Journal of Usability Studies 17, 3 (May 2022), 117–130.
[35]
Federico Germini, Noella Noronha, Victoria Borg Debono, Binu Abraham Philip, Drashti Pete, Tamara Navarro, Arun Keepanasseril, Sameer Parpia, Kerstin de Wit, and Alfonso Iorio. 2022. Accuracy and Acceptability of Wrist-Wearable Activity-Tracking Devices: Systematic Review of the Literature. Journal of Medical Internet Research 24, 1 (Jan. 2022), e30791. https://doi.org/10.2196/30791
[36]
Ashley K Griggs, Marcus E Berzofsky, Bonnie E Shook-Sa, Christine H Lindquist, Kimberly P Enders, Christopher P Krebs, Michael Planty, and Lynn Langton. 2018. The Impact of Greeting Personalization on Prevalence Estimates in a Survey of Sexual Assault Victimization. Public Opinion Quarterly 82, 2 (June 2018), 366–378. https://doi.org/10.1093/poq/nfy019
[37]
Todd M. Gureckis, Jay Martin, John McDonnell, Alexander S. Rich, Doug Markant, Anna Coenen, David Halpern, Jessica B. Hamrick, and Patricia Chan. 2016. psiTurk: An open-source framework for conducting replicable behavioral experiments online. Behavior Research Methods 48, 3 (Sept. 2016), 829–842. https://doi.org/10.3758/s13428-015-0642-8
[38]
Daniel Harrison, Paul Marshall, Nadia Bianchi-Berthouze, and Jon Bird. 2015. Activity tracking: barriers, workarounds and customisation. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing(UbiComp ’15). ACM, Osaka, Japan, 617–621. https://doi.org/10.1145/2750858.2805832
[39]
Dirk Heerwegh, Tim Vanhove, Koen Matthijs, and Geert Loosveldt. 2005. The effect of personalization on response rates and data quality in web surveys. International Journal of Social Research Methodology 8, 2 (April 2005), 85–99. https://doi.org/10.1080/1364557042000203107
[40]
Felix Henninger, Yury Shevchenko, Ulf K. Mertens, Pascal J. Kieslich, and Benjamin E. Hilbig. 2022. lab.js: A free, open, online study builder. Behavior Research Methods 54, 2 (April 2022), 556–573. https://doi.org/10.3758/s13428-019-01283-5
[41]
Kevin Huguenin, Igor Bilogrevic, Joana Soares Machado, Stefan Mihaila, Reza Shokri, Italo Dacosta, and Jean-Pierre Hubaux. 2018. A Predictive Model for User Motivation and Utility Implications of Privacy-Protection Mechanisms in Location Check-Ins. IEEE Transactions on Mobile Computing 17, 4 (April 2018), 760–774. https://doi.org/10.1109/TMC.2017.2741958
[42]
Solomon Hykes. 2023. Docker. https://github.com/moby/moby
[43]
Maritza Johnson, Serge Egelman, and Steven M. Bellovin. 2012. Facebook and privacy: it’s complicated. In Proceedings of the Symposium on Usable Privacy and Security. Association for Computing Machinery, Washington, D.C., 1–15. https://doi.org/10.1145/2335356.2335369
[44]
Simon L. Jones, William Hue, Ryan M. Kelly, Rosemarie Barnett, Violet Henderson, and Raj Sengupta. 2021. Determinants of Longitudinal Adherence in Smartphone-Based Self-Tracking for Chronic Health Conditions: Evidence from Axial Spondyloarthritis. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, 1 (March 2021), 16:1–16:24. https://doi.org/10.1145/3448093
[45]
Mohammad Taha Khan, Maria Hyun, Chris Kanich, and Blase Ur. 2018. Forgotten But Not Gone: Identifying the Need for Longitudinal Data Management in Cloud Storage. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal, QC, Canada, 1–12. https://doi.org/10.1145/3173574.3174117
[46]
Seoyoung Kim, Arti Thakur, and Juho Kim. 2020. Understanding Users’ Perception Towards Automated Personality Detection with Group-specific Behavioral Data. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems(CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376250
[47]
Barbara Kitchenham, O. Pearl Brereton, David Budgen, Mark Turner, John Bailey, and Stephen Linkman. 2009. Systematic literature reviews in software engineering – A systematic literature review. Information and Software Technology 51, 1 (Jan. 2009), 7–15. https://doi.org/10.1016/j.infsof.2008.09.009
[48]
Henrik Kniberg and Mattias Skarin. 2010. Kanban and Scrum: making the most of both. C4Media, s. l.
[49]
Simon Kühne and Martin Kroh. 2018. Personalized Feedback in Web Surveys: Does It Affect Respondents’ Motivation and Data Quality? Social Science Computer Review 36, 6 (Dec. 2018), 744–755. https://doi.org/10.1177/0894439316673604
[50]
Reed Larson and Mihaly Csikszentmihalyi. 2014. The Experience Sampling Method. In Flow and the Foundations of Positive Psychology: The Collected Works of Mihaly Csikszentmihalyi, Mihaly Csikszentmihalyi (Ed.). Springer Netherlands, Dordrecht, 21–34. https://doi.org/10.1007/978-94-017-9088-8_2
[51]
Jonathan Lazar, Jinjuan Heidi Feng, and Harry Hochheiser. 2017. Chapter 10 - Usability testing. In Research Methods in Human Computer Interaction (Second Edition), Jonathan Lazar, Jinjuan Heidi Feng, and Harry Hochheiser (Eds.). Morgan Kaufmann, Boston, 263–298. https://doi.org/10.1016/B978-0-12-805390-4.00010-8
[52]
David Ledo, Steven Houben, Jo Vermeulen, Nicolai Marquardt, Lora Oehlberg, and Saul Greenberg. 2018. Evaluation Strategies for HCI Toolkit Research. In Proc. of the CHI Conference on Human Factors in Computing Systems(CHI ’18). Association for Computing Machinery, Montreal, QC, Canada, 1–17. https://doi.org/10.1145/3173574.3173610
[53]
Hyunsoo Lee, Soowon Kang, and Uichin Lee. 2022. Understanding Privacy Risks and Perceived Benefits in Open Dataset Collection for Mobile Affective Computing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 2 (July 2022), 61:1–61:26. https://doi.org/10.1145/3534623
[54]
Chantal Lidynia, Philipp Brauner, and Martina Ziefle. 2018. A Step in the Right Direction – Understanding Privacy Concerns and Perceived Sensitivity of Fitness Trackers. In Advances in Human Factors in Wearable Technologies and Game Design(Advances in Intelligent Systems and Computing), Tareq Ahram and Christianne Falcão (Eds.). Springer International Publishing, Cham, 42–53. https://doi.org/10.1007/978-3-319-60639-2_5
[55]
Panos Markopoulos, Nikolaos Batalas, and Annick Timmermans. 2015. On the Use of Personalization to Enhance Compliance in Experience Sampling. In Proceedings of the European Conference on Cognitive Ergonomics 2015(ECCE ’15). Association for Computing Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/2788412.2788427
[56]
Andrew Mathews and Colin MacLeod. 2005. Cognitive Vulnerability to Emotional Disorders. Annual Review of Clinical Psychology 1, 1 (2005), 167–195. https://doi.org/10.1146/annurev.clinpsy.1.102803.143916
[57]
Meta. 2023. React. https://github.com/facebook/react
[58]
Maria D. Molina, Emily S Zhan, Devanshi Agnihotri, Saeed Abdullah, and Pallav Deka. 2023. Motivation to Use Fitness Application for Improving Physical Activity Among Hispanic Users: The Pivotal Role of Interactivity and Relatedness. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems(CHI ’23). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3544548.3581200
[59]
Andras Molnar. 2019. SMARTRIQS: A Simple Method Allowing Real-Time Respondent Interaction in Qualtrics Surveys. Journal of Behavioral and Experimental Finance 22 (June 2019), 161–169. https://doi.org/10.1016/j.jbef.2019.03.005
[60]
Francisco Muñoz-Leiva, Juan Sánchez-Fernández, Francisco Montoro-Ríos, and José Ángel Ibáñez-Zapata. 2010. Improving the response rate and quality in Web-based surveys through the personalization and frequency of reminder mailings. Quality & Quantity 44, 5 (Aug. 2010), 1037–1052. https://doi.org/10.1007/s11135-009-9256-5
[61]
Evgeny Nikulchev, Dmitry Ilin, Pavel Kolyasnikov, Shamil Magomedov, Anna Alexeenko, Alexander N. Kosenkov, Andrey Sokolov, Artem Malykh, Victoria Ismatullina, and Sergey Malykh. 2021. Isolated Sandbox Environment Architecture for Running Cognitive Psychological Experiments in Web Platforms. Future Internet 13, 10 (Oct. 2021), 245. https://doi.org/10.3390/fi13100245
[62]
Jason Orlosky, Onyeka Ezenwoye, Heather Yates, and Gina Besenyi. 2019. A Look at the Security and Privacy of Fitbit as a Health Activity Tracker. In Proceedings of the 2019 ACM Southeast Conference(ACM SE ’19). Association for Computing Machinery, Kennesaw, GA, USA, 241–244. https://doi.org/10.1145/3299815.3314468
[63]
Xinru Page, Paritosh Bahirat, Muhammad I. Safi, Bart P. Knijnenburg, and Pamela Wisniewski. 2018. The Internet of What? Understanding Differences in Perceptions and Adoption for the Internet of Things. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 4 (Dec. 2018), 183:1–183:22. https://doi.org/10.1145/3287061
[64]
Pallets. 2023. Flask. https://github.com/pallets/flask
[65]
Jenny Preece, Yvonne Rogers, and Helen Sharp. 2015. Interaction design: beyond human-computer interaction (fourth edition ed.). John Wiley & Sons Ltd, Chichester, West Sussex.
[66]
Kimberly C. Preusse, Tracy L. Mitzner, Cara Bailey Fausset, and Wendy A. Rogers. 2017. Older Adults’ Acceptance of Activity Trackers. Journal of Applied Gerontology 36, 2 (Feb. 2017), 127–155. https://doi.org/10.1177/0733464815624151
[67]
Ruth Ravichandran, Sang-Wha Sien, Shwetak N. Patel, Julie A. Kientz, and Laura R. Pina. 2017. Making Sense of Sleep Sensors: How Sleep Sensing Technologies Support and Undermine Sleep Health. In Proc. of the CHI Conference on Human Factors in Computing Systems(CHI ’17). Association for Computing Machinery, Denver, Colorado, USA, 6864–6875. https://doi.org/10.1145/3025453.3025557
[68]
Jungwook Rhim, Minji Kwak, Yeaeun Gong, and Gahgene Gweon. 2022. Application of humanization to survey chatbots: Change in chatbot perception, interaction experience, and survey data quality. Computers in Human Behavior 126 (Jan. 2022), 107034. https://doi.org/10.1016/j.chb.2021.107034
[69]
João Pedro Rodrigues, Nabor Mendonça, and Ivan Machado. 2021. Lyzeli: a tool for identifying the clues in survey research data. In Proceedings of the XXXV Brazilian Symposium on Software Engineering(SBES ’21). Association for Computing Machinery, New York, NY, USA, 347–352. https://doi.org/10.1145/3474624.3476018
[70]
Guido van Rossum. 2023. Python. https://github.com/python/cpython
[71]
Johnny Saldana. 2021. The Coding Manual for Qualitative Researchers (4th ed.). SAGE Publishing Inc, Thousand Oaks, California. https://uk.sagepub.com/en-gb/eur/the-coding-manual-for-qualitative-researchers/book273583
[72]
Kavous Salehzadeh Niksirat, Lev Velykoivanenko, Noé Zufferey, Mauro Cherubini, Kévin Huguenin, and Mathias Humbert. 2024. Wearable Activity Trackers: A Survey on Utility, Privacy, and Security. Comput. Surveys (2024). https://doi.org/10.1145/3645091 To appear.
[73]
Stefan Schneegass, Romina Poguntke, and Tonja Machulla. 2019. Understanding the Impact of Information Representation on Willingness to Share Information. In Proc. of the CHI Conference on Human Factors in Computing Systems(CHI ’19). Association for Computing Machinery, Glasgow, Scotland, UK, 1–6. https://doi.org/10.1145/3290605.3300753
[74]
Grace Donghee Shin. 2020. Investigating the impact of daily life context on physical activity in terms of steps information generated by wearable activity tracker. International Journal of Medical Informatics 141 (Sept. 2020), 104222. https://doi.org/10.1016/j.ijmedinf.2020.104222
[75]
David Stück, Haraldur Tómas Hallgrímsson, Greg Ver Steeg, Alessandro Epasto, and Luca Foschini. 2017. The Spread of Physical Activity Through Social Networks. In Proceedings of the 26th International Conference on World Wide Web(WWW ’17). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 519–528. https://doi.org/10.1145/3038912.3052688
[76]
Jose M. Such, Joel Porter, Sören Preibusch, and Adam Joinson. 2017. Photo Privacy Conflicts in Social Media: A Large-scale Empirical Study. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems(CHI ’17). Association for Computing Machinery, New York, NY, USA, 3821–3832. https://doi.org/10.1145/3025453.3025668
[77]
Niels van Berkel, Denzil Ferreira, and Vassilis Kostakos. 2017. The Experience Sampling Method on Mobile Devices. Comput. Surveys 50, 6 (Dec. 2017), 93:1–93:40. https://doi.org/10.1145/3123988
[78]
Niels van Berkel, Jorge Goncalves, Lauri Lovén, Denzil Ferreira, Simo Hosio, and Vassilis Kostakos. 2019. Effect of experience sampling schedules on response rate and recall accuracy of objective self-reports. International Journal of Human-Computer Studies 125 (May 2019), 118–128. https://doi.org/10.1016/j.ijhcs.2018.12.002
[79]
Lev Velykoivanenko, Kavous Salehzadeh Niksirat, Noé Zufferey, Mathias Humbert, Kévin Huguenin, and Mauro Cherubini. 2022. Are Those Steps Worth Your Privacy? Fitness-Tracker Users’ Perceptions of Privacy and Utility. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, 4 (Dec. 2022), 181:1–181:41. https://doi.org/10.1145/3494960
[80]
Jessica Vitak, Yuting Liao, Priya Kumar, Michael Zimmer, and Katherine Kritikos. 2018. Privacy Attitudes and Data Valuation Among Fitness Tracker Users. In Transforming Digital Worlds(Lecture Notes in Computer Science), Gobinda Chowdhury, Julie McLeod, Val Gillet, and Peter Willett (Eds.). Springer International Publishing, Cham, 229–239. https://doi.org/10.1007/978-3-319-78105-1_27
[81]
Jing Wang, Na Wang, and Hongxia Jin. 2016. Context Matters? How Adding the Obfuscation Option Affects End Users’ Data Disclosure Decisions. In Proceedings of the Int’l Conf. on Intelligent User Interfaces(IUI ’16). Association for Computing Machinery, 299–304. https://doi.org/10.1145/2856767.2856817
[82]
Miranda Wei, Madison Stamos, Sophie Veys, Nathan Reitinger, Justin Goodman, Margot Herman, Dorota Filipczuk, Ben Weinshel, Michelle L. Mazurek, and Blase Ur. 2020. What Twitter Knows: Characterizing Ad Targeting Practices, User Perceptions, and Ad Explanations Through Users’ Own Twitter Data. In Proceedings of the 29th USENIX Conference on Security Symposium(SEC’20). USENIX Association, USA, 145–162. https://www.usenix.org/conference/usenixsecurity20/presentation/wei
[83]
James Wen and Ashley Colley. 2022. Hybrid Online Survey System with Real-Time Moderator Chat. In Proceedings of the 21st International Conference on Mobile and Ubiquitous Multimedia(MUM ’22). Association for Computing Machinery, New York, NY, USA, 257–258. https://doi.org/10.1145/3568444.3570593
[84]
Nila Armelia Windasari, Fu-ren Lin, and Yi-Chin Kato-Lin. 2021. Continued use of wearable fitness technology: A value co-creation perspective. International Journal of Information Management 57 (April 2021), 102292. https://doi.org/10.1016/j.ijinfomgt.2020.102292
[85]
Jacob O. Wobbrock and Julie A. Kientz. 2016. Research contributions in human-computer interaction. Interactions 23, 3 (April 2016), 38–44. https://doi.org/10.1145/2907069
[86]
Ziang Xiao, Michelle X. Zhou, Q. Vera Liao, Gloria Mark, Changyan Chi, Wenxi Chen, and Huahai Yang. 2020. Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys with Open-ended Questions. ACM Transactions on Computer-Human Interaction 27, 3 (June 2020), 15:1–15:37. https://doi.org/10.1145/3381804
[87]
Jerin Yasmin, Yuan Tian, and Jinqiu Yang. 2020. A First Look at the Deprecation of RESTful APIs: An Empirical Study. In 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, Adelaide, Australia, 151–161. https://doi.org/10.1109/ICSME46990.2020.00024
[88]
Brahim Zarouali, Theo Araujo, Jakob Ohme, and Claes de Vreese. 2023. Comparing Chatbots and Online Surveys for (Longitudinal) Data Collection: An Investigation of Response Characteristics, Data Quality, and User Evaluation. Communication Methods and Measures 0, 0 (Jan. 2023), 1–20. https://doi.org/10.1080/19312458.2022.2156489
[89]
Xin Zhou, Archana Krishnan, and Ersin Dincelli. 2022. Examining user engagement and use of fitness tracking technology through the lens of technology affordances. Behaviour & Information Technology 41, 9 (July 2022), 2018–2033. https://doi.org/10.1080/0144929X.2021.1915383
[90]
Noé Zufferey, Mathias Humbert, Romain Tavenard, and Kévin Huguenin. 2023. Watch your Watch: Inferring Personality Traits from Wearable Activity Trackers. In Proceedings of the 32nd USENIX Conference on Security Symposium(SEC ’23). USENIX Association, Anaheim, CA, USA, 193–210. https://www.usenix.org/conference/usenixsecurity23/presentation/zufferey

Index Terms

  1. Designing a Data-Driven Survey System: Leveraging Participants' Online Data to Personalize Surveys

        Recommendations

        Comments

        Information & Contributors

        Information

        Published In

        cover image ACM Conferences
        CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems
        May 2024
        18961 pages
        ISBN:9798400703300
        DOI:10.1145/3613904
        This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike International 4.0 License.

        Sponsors

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 11 May 2024

        Check for updates

        Badges

        • Honorable Mention

        Author Tags

        1. artefact
        2. online accounts
        3. surveys
        4. user interfaces

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Data Availability

        Funding Sources

        Conference

        CHI '24

        Acceptance Rates

        Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

        Contributors

        Other Metrics

        Bibliometrics & Citations

        Bibliometrics

        Article Metrics

        • 0
          Total Citations
        • 750
          Total Downloads
        • Downloads (Last 12 months)750
        • Downloads (Last 6 weeks)283
        Reflects downloads up to 13 Sep 2024

        Other Metrics

        Citations

        View Options

        View options

        PDF

        View or Download as a PDF file.

        PDF

        eReader

        View online with eReader.

        eReader

        HTML Format

        View this article in HTML Format.

        HTML Format

        Get Access

        Login options

        Media

        Figures

        Other

        Tables

        Share

        Share

        Share this Publication link

        Share on social media