https://doi.org/10.1145/3613904.3642404
Research Article | Open Access

SustAInable: How Values in the Form of Individual Motivation Shape Algorithms’ Outcomes. An Example Promoting Ecological and Social Sustainability

Published: 11 May 2024

Abstract

When thinking about algorithms, cold lines of code and purely rational decisions may come to mind. However, this picture is incomplete. Numerous examples illustrate how human aspects shape algorithmic output (e.g., via biased training data). This study delves into how developers’ and users’ individual differences can influence algorithmic output, focusing on environmental and altruistic motivation. In an online survey (N = 766), participants rated different emails on their likelihood of being spam as input for a hypothetical spam-filter algorithm. Participants’ environmental motivation was negatively correlated with classifying emails from environmental and humanitarian organizations as spam. Thus, individuals with a stronger environmental motivation rated the emails in such a way that the spam filter was biased toward the common good. However, altruistic motivation had no impact on the ratings. These findings suggest that environmental motivation extends beyond pro-environmental behaviors by also influencing prosocial behaviors, thus offering insights for developing sustainable algorithms.

1 Introduction

The evolution of artificial intelligence (AI) is advancing, and yet, algorithmic systems are no longer a prospect of the future but have already become an integral part of the lives of many. Their reach has expanded from mundane applications (e.g., systems recommending music) to systems that make potentially life-changing predictions about delinquents' recidivism rates [2]. In the financial sector, for instance, robo-advisors move significant financial assets. In human resource management, algorithmic systems support personnel selection; and in cybersecurity, they are used to detect cyber-attacks. Given the wide-ranging pervasiveness of algorithms, discussions about how to ensure that these systems serve the welfare of society at large are increasing. Such discussions include the desire for them to be fair, so they do not systematically favor or disadvantage certain groups or individuals on the basis of inherent characteristics [37]. Yet, algorithms have produced biased output and have systematically discriminated against specific groups of people [41, 43]. To understand the emergence of such biases and to work toward their mitigation, interventions have been developed (e.g., in the form of legislation [14]), and intensive research has examined various sources of bias and their mitigation, either by technical means [43] or by raising awareness of potential sources of bias [13].
Several scholars and practitioners are debating whether a “code of ethics” should be programmed into algorithmic systems to ensure that the generated output is aligned with human values [59], thus giving rise to a growing strand of research focusing on machine ethics, e.g., [50]. Bringing the issue of values in AI to the table marks an important starting point. However, we argue that, in many cases, regulation and guidelines are not enough to obtain nondiscriminatory algorithms. Although such regulations can serve as a framework, they cannot cover all aspects and decisions involved in the development of algorithms. Naturally, there are still many degrees of freedom in the development and training of algorithmic systems, and it is impossible for anyone to foresee all eventualities. However, on a higher level, the goals of an algorithm are largely determined by humans—primarily the developers, entrepreneurs, and users who create, disseminate, and train these systems. Given this context, understanding the extent to which humans can influence algorithmic output is paramount.
We argue that a stakeholder's motivation will find its way into an algorithm and its output, even independent of the algorithm's primary goal. Motivation refers to the psychological driving force behind behavior [16]. In many contexts (e.g., in the context of environmental motivation), motivation is based on a person's values [24, 29]. In other words, motivation directs a person's inherent values into behavior (e.g., a decision). In this study, we focus on a spam-filter algorithm, a case in which, during training, individuals may tend to make value-based decisions that are driven by their motivation. A spam filter has the primary objective of reducing the amount of spam (i.e., unsolicited messages sent to many users [38]) that lands in users' inboxes. However, there will be many (spam) emails whose classification as spam is highly dependent on a person's individual preferences (e.g., values), which form a person's motivation and, hence, guide their behavior [23]. Such motivation is continuously and often unconsciously at work in individuals' daily lives and work lives.
Applied to the context of algorithms, it can thus be assumed that a person's motivation can influence their decisions while developing and training algorithms. An infamous example of users' motivation finding its way into algorithmic output is Microsoft's AI chatbot "Tay," which rapidly descended into bigotry on Twitter after interacting with users. Conversely, it is also likely that stakeholders with a strong concern for the welfare of others influence the algorithmic decision process in such a way that its output is biased toward the common good. For instance, a company or a team of developers who are strongly environmentally motivated would perhaps set the default for a navigation system to choose the route with the lowest CO2 emissions, just as they would in “real life” [57].
Algorithmic output is tied to the data on which the algorithm was trained, as the algorithm learns patterns from its training data and applies these patterns to make predictions in new data [17]. Thus, if these data reflect biased decisions, the output will be biased as well. It is likely that individuals for whom social and ecological sustainability is a core value that is reflected in their altruistic motivation (i.e., the motivation to enhance the welfare of others [4]) or environmental motivation (i.e., the motivation to benefit the natural environment, see, e.g., [33]) also make decisions that may shape the algorithmic output toward sustainability while developing and training algorithms.
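To make this mechanism concrete, the following minimal sketch (illustrative only and not taken from the present study; all messages, labels, and names are hypothetical) trains a toy Naive Bayes spam filter on two sets of human labels that differ solely in how one environmental email was rated. The two fitted models then assign different spam probabilities to a new, similar message, illustrating how a rater's motivation can propagate into algorithmic output.

```python
# Minimal sketch: a toy spam filter trained on human-provided labels.
# If the labels reflect a rater's motivation, the fitted model inherits that slant.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training messages and labels (1 = spam, 0 = not spam).
emails = [
    "win bitcoin now click here",                          # obvious spam
    "your account statement is ready",                     # bank, legitimate
    "sign our petition to save the forest",                # environmental organization
    "thank you for your donation to nature conservation",  # environmental organization
    "cheap pills click this link",                         # obvious spam
]
labels_neutral_rater = [1, 0, 1, 0, 1]    # rates the petition email as spam
labels_motivated_rater = [1, 0, 0, 0, 1]  # environmentally motivated rater keeps it

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
new_email = vectorizer.transform(["petition to protect the forest, click here"])

for name, labels in [("neutral", labels_neutral_rater), ("motivated", labels_motivated_rater)]:
    model = MultinomialNB().fit(X, labels)
    spam_prob = model.predict_proba(new_email)[0, 1]  # probability of the spam class
    print(f"{name} rater -> spam probability: {spam_prob:.2f}")
```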
Therefore, the aim of this study was to examine how individual differences in stakeholders' environmental and altruistic motivation could—through the rating of training material—influence an algorithm's decisions and thus its outcome. More precisely, we examined whether environmental and altruistic motivation are related to decisions when training a spam-filter algorithm. The role of stakeholders’ motivation in algorithm training has been largely unexplored thus far, and understanding these influences can help to either mitigate biases arising from value-based decisions, or, by contrast, build on this effect to design algorithms that serve society as a whole. Our study contributes to research on algorithmic bias by addressing the following research questions:
Can stakeholders’ (i.e., those who develop or train an algorithm) motivation influence an algorithm's output?
a. Does environmental motivation influence individuals’ decisions when training an algorithm?
b. Does altruistic motivation influence individuals’ decisions when training an algorithm?

2 Related Work

2.1 The Emergence of Bias in Algorithmic Output

Algorithmic systems are designed to generate output (e.g., a decision) on the basis of patterns in data from past instances (i.e., training data). These data, especially if they are social data, frequently contain biases and incorporate social stereotypes [55], which are then inherited and perpetuated by the model [20]. Mehrabi et al. [37] provided a comprehensive framework classifying various potential sources of bias in algorithmic output into three categories: data-to-algorithm bias, algorithm-to-user bias, and user-to-data bias. The first category, data-to-algorithm bias, highlights the limitations of models when they do not adequately represent the population (e.g., a lack of diversity in image data sets can cause facial recognition algorithms to perform differently on the basis of a person's skin type and gender [7]). The second category, algorithm-to-user bias, is especially prominent in recommendation and content management algorithms, as these influence the visibility and ranking of content. Users, in turn, are more likely to engage with content that is more visible or ranked higher, thus reinforcing its prominence. The third category is user-to-data bias. This category encompasses different biases, such as historical bias, where patterns inherent in human-generated data [8] are learned by the algorithm and eventually lead to discrimination (this bias is evident in scenarios such as occupational roles traditionally held by men or women, leading to biased representations in the training data). Another example of user-to-data bias is content production bias. It arises from variations in how different populations generate content, for instance, because they differ in their use of language. This bias can arise from any person who generates data—not only from users but also from curators who train the algorithm. Values influencing an algorithm via stakeholders’ motivations belong to the category of user-to-data bias. The training data would be skewed in such a way that it reflects a set of values that might not correspond to the overall population of users.
We draw on research from psychology and communication science to understand the emergence of user-to-data bias. Like human decisions in general, decisions made during the training of algorithms (e.g., when curators rate the responses of a chatbot as appropriate or inappropriate, or when emails are classified as spam or not spam) may depend on the (value-relevant) involvement—that is, individuals’ motivation to behave in a manner that is aligned with their values regarding the issue [16, 19, 35]. This involvement influences how individuals process information and results in people being less easily persuaded to change their attitudes [19]. Research from environmental psychology has found that involvement (i.e., the motivation to protect the environment—with values being an integral part of it) is related to conservation behavior, e.g., [22, 16]. Applied to the context of algorithms, this association implies that, when training the algorithm, people with a high level of involvement (i.e., a high level of environmental motivation) are likely to make decisions that are consistent with their motivation to protect the environment. In terms of the case study of our spam-filter algorithm, this means that the strength of people's environmental motivation may be related to how positively they perceive emails on this topic.
Researchers examined the consequences of this involvement with regard to sharing information in social networks [12, 51]. Similar to the training of various algorithms (e.g., spam filters, chatbots, or content moderation), people make decisions about the visibility of certain content for other people. For instance, by frequently sharing health-related news on social media, individuals influence their network's feed, determining which topics are more visible to their contacts. Scholz et al. [51] argued that such information sharing is guided by value-based decision-making. Thus, when deciding whether to share content, a person weighs the positive effects of sharing (e.g., providing access to relevant information) against possible costs (e.g., the risk that the message will not be well-received in one's social group) and chooses the option that offers the highest value for them. The likelihood of sharing increases the more a person perceives a message as relevant to themselves or to other people [12]. This inclination to share content that is relevant for oneself suggests that when training algorithms— especially in the aforementioned contexts—individuals’ decisions are similarly related to their perceptions of relevance and their personal involvement in the topic. As algorithm curators are usually anonymous, the costs (risks) are negligible compared with message sharing in social networks. Consequently, in the case of algorithms, the decision of what content should be visible likely depends even more on personal relevance. We expect that for people with high involvement (i.e., high environmental or altruistic motivation), the self-relevance of emails dealing with the respective issue will be higher.
Besides biases in the data used to train algorithms, human decisions made during the development process (e.g., when specifying model parameters) may still lead to biased algorithmic output [26, 27, 37, 42]. Thus, whenever the development process allows for degrees of freedom, it is likely that developers will base their programming decisions on their own needs and preferences, which form their motivation [45, 60], and they will thereby make value-consistent decisions. This self-referential approach, in which values (sometimes unconsciously) guide the decision-making process, extends to the training of algorithms, particularly in the case of machine learning models, which rely heavily on human curation. Therefore, we propose that developers’ and curators’ motivation can influence their decisions when developing and training an algorithm. Similarly, the decisions made by users interacting with these systems can potentially be shaped by their values. A relationship between values and decisions has already been empirically identified (e.g., with regard to social decisions [53]). We aim to apply these findings to the development and use of algorithms and to empirically show that stakeholders' values—through their motivations—affect their decisions and, in turn, may lead to biased algorithmic output.
Existing studies have predominantly focused on exploring unfair biases in algorithmic output [30], that is, biases that lead to systematic discrimination [15]. In this work, we aim to adopt a broader definition of bias, considering it in its original—morally neutral—sense as a simple “skewness” or “slant.” By using this broad definition, we could also focus on nondiscriminatory biases and thus discuss their potential opportunities.

2.2 Reducing Bias and Incorporating Values into Algorithms

In addition to research on the emergence of bias, another strand of research focuses on developing methods to reduce bias in algorithmic output. Such research includes approaches for raising stakeholders' awareness as well as the development of fairness measures and algorithms capable of mitigating bias, e.g., [3]. To reduce bias, researchers have emphasized the importance of caution and education on the emergence of bias in the development process, as well as a diverse workforce [3]. Continuous monitoring of algorithms even after the training period and algorithm audits are also recommended as tools for identifying biases [26, 32, 36]. Kleinberg et al. [27] further noted that with appropriate laws and regulatory systems in place, algorithms would potentially facilitate the detection and prevention of discriminating biases, as algorithmic decision rules—although often described as a black box—can in fact be made more transparent than human decision-making processes. From this angle, algorithms could even be regarded as a force for the common good by “nudging” user behavior in a less discriminatory (and thus more sustainable) direction [28]. Taking this idea further, not only the reduction of discrimination but also a bias toward the common good or ecological sustainability would lead to more sustainable algorithmic decisions, which might then nudge user behavior. The notion of incorporating values and ethical considerations into algorithms has also been discussed in research on machine ethics [58, 59], with representatives of this line of research emphasizing the need to incorporate values into algorithmic decisions in order to prevent unethical outcomes, although the success of this approach has been criticized [6]. Among those who advocate for incorporating values into algorithms, sustainability is a value that is increasingly recognized as a crucial aspect of AI design [58].
In software development in general, sustainability is also becoming increasingly relevant, as reflected in the development of guidelines to incorporate sustainability considerations into the development process [46]. However, besides guidelines, human factors—particularly with regard to stakeholders' values and the motivation related to these values—also play a role in software development. A diverse and educated workforce can help reduce discriminatory biases by identifying them and applying tools to reduce them [3]. Moreover, it is also likely that a more prosocial or environmentally motivated workforce would incorporate these values more strongly into an algorithm, as values may be reflected in the decisions people make during algorithm development and training. Lammert [31] postulated that the values of developers and other stakeholders of a software product are reflected in the product and thereby also influence its sustainability impact. However, this claim has yet to be empirically investigated.

2.3 The present study

The aim of this study was to explore whether stakeholders' prosocial and environmental motivation influences the preparation and selection of the training material that goes into the training of algorithms, ultimately affects algorithmic output, and can thus create a bias toward ecological or social sustainability. We aimed to address the abovementioned research gaps by (a) examining the relationship between stakeholders’ motivations and algorithmic training material as the primary source of bias and (b) exploring how to potentially increase sustainability in algorithmic output. Specifically, we investigated the relationship between the motivation to benefit other people or, more generally, nature (i.e., environmental and altruistic motivation) and the decisions people make when training a spam-filter algorithm. We hypothesized:
H1: Stakeholders’ environmental motivation is negatively related to categorizing emails from environmental organizations as spam.
H2: Stakeholders’ altruistic motivation is negatively related to categorizing emails from humanitarian organizations as spam.
Previous research has shown that prosocial and environmental motivation are correlated [39]. Related to this finding, studies have shown that behaviors directed toward the well-being of other people and toward the environment are linked to the same personality traits [10, 44, 56]. Otto et al. [44] argued that there is an overarching factor that can be regarded as a prosocial propensity or an orientation toward the welfare of others and that it can be directed toward both human and nonhuman entities. This orientation can manifest in motivation for both pro-environmental and prosocial behaviors. Thus, due to the close connection between these two constructs, we expected a spillover effect in spam categorization. That is, a high degree of environmental motivation might lead not only to protecting the environment but also to helping people. In turn, this motivation influences the categorization of emails from social organizations (and vice versa). Therefore, we hypothesized:
H3a: Stakeholders’ altruistic motivation is also negatively related to categorizing emails from environmental organizations as spam (however, the effect is smaller than for humanitarian organizations).
H3b: Stakeholders’ environmental motivation is also negatively related to categorizing emails from humanitarian organizations as spam (however, the effect is smaller than for environmental organizations).

3 Method

We conducted an a priori power analysis with the R package “pwr” [54], resulting in a necessary sample size of N = 750 participants to be able to detect small effects (r = .10) with an alpha level of .05 and a power of .80 [11]. We preregistered our hypotheses on the Open Science Framework (https://osf.io/2ae49). Please note that the hypotheses were preregistered along with another study.
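As an illustration of such an a priori power analysis, the sketch below implements the widely used Fisher z approximation for the sample size needed to detect a correlation of a given size; the authors used the R package "pwr" [54], whose routine may yield slightly different values.

```python
# Minimal sketch of an a priori power analysis for a correlation,
# based on the Fisher z approximation (values may differ slightly from pwr's output).
import math
from scipy.stats import norm

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size to detect a correlation r with a two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical z for the two-sided alpha level
    z_beta = norm.ppf(power)           # z corresponding to the desired power
    c = math.atanh(r)                  # Fisher z transform of the target correlation
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

# Small effect (r = .10), alpha = .05, power = .80, as in the study.
print(n_for_correlation(0.10))
```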

3.1 Design and Procedure

Data were collected between September 10 and 16, 2021, with the help of a research panel. Participants were incentivized by the market research company to participate in the study. All participants gave informed consent, and we adhered to the ethical guidelines of the American Psychological Association [1].

3.1.1 Sample.

The sample was representative of the German internet-using population in terms of gender, age, education level, and state. After excluding participants with wrong answers in the attention check, more than 25% missing answers, or response times that were too short (more than twice as fast as the median response time, see [34]), the resulting sample consisted of N = 766 participants between 18 and 65 years of age. The mean age was 43.28 (SD = 13.77), and 50.1% identified as women, whereas 49.9% identified as men. Participants received a compensation of €1.25 for their participation and took a median time of 21.36 minutes to complete the survey. Participants were free to stop and resume answering the questions anytime. Thus, the measured time might overestimate the actual time spent on the questionnaire.

3.1.2 Procedure.

After the participants received brief information about the study and were informed about the voluntary nature of the study and the use of the data, demographic variables were assessed. Then, participants were instructed to train a spam filter, that is, they read different emails and indicated whether they would classify the respective email as spam or not. Appendix 1 shows an example of the stimulus material.
Each participant read the same eight emails: Two emails served as a check for whether participants understood the task—one of them was a real spam email, and the other one was an email from a reputable sender that should receive the lowest spam ratings. The other six emails differed in their content. These emails were assigned to three categories: pro-self (emails from a bank), pro-social (emails from humanitarian organizations), and pro-environmental (emails from environmental organizations), with two emails assigned to each of these categories. Table 1 provides an overview of the eight emails.
Table 1:
| Category | Sender | Purpose | Source |
| not spam | tax authority (ELSTER) | information (message about an e-certificate for an online tax portal) | original email |
| pro-self personal email | bank (Deutsche Bank) | information (new documents in online banking account) | original email |
| pro-self impersonal email | bank (Commerzbank) | advertising / call to open stock portfolio | original email |
| pro-social personal email | humanitarian organization (UNICEF) | thank you for your donation | website |
| pro-social impersonal email | humanitarian organization (Brot für die Welt) | request to sign a petition | website |
| pro-environmental personal email | environmental organization (NABU) | thank you for your donation | website |
| pro-environmental impersonal email | environmental organization (Greenpeace) | request to sign a petition | website |
| spam email | bitcoin millionaire | spam (how to become a bitcoin millionaire) | original email |
Table 1: Overview of the different emails used as stimulus material
All of them were either based on real emails that the researchers had received themselves or were generated from information from the website of the respective organization. The purposes of the two emails in each category also differed (personal information vs. impersonal call to action). These different purposes served to provide variation in the difficulty of categorizing the emails as spam, with personal emails expected to be categorized as spam to a lower extent. Furthermore, all emails contained a request to click on a hyperlink. After each email, participants rated whether they would classify the email as spam or not. After the rating, the independent variables were assessed. In addition, the questionnaire included other variables that were collected for a different study.

3.2 Measures

If not otherwise stated, all variables were assessed with 5-point scales, either ranging from 1 (does not apply at all) to 5 (fully applies) or with the following anchors for items assessing behaviors: 1 = never, 2 = rarely, 3 = occasionally, 4 = often, 5 = very often/always.

3.2.1 Altruistic motivation.

To assess altruistic motivation, we used 17 items from the Self-Report Altruism scale [47]. Because the study was conducted during the COVID-19 pandemic, some items were omitted, as they describe behaviors that would violate social distancing etiquette. Therefore, one more item—adapted for the pandemic context—was added (“During the COVID-19 pandemic, I made purchases for a person in the risk group or in quarantine”). To be able to perform a Rasch analysis, all items were transformed into a dichotomous format, see also [44]. Answers on the first three levels of the scale were recoded as 0 (low altruistic motivation), and answers on the fourth and fifth levels were recoded as 1 (high altruistic motivation). The person separation reliability was satisfactory (r = .79).
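A minimal sketch of this dichotomization (with hypothetical item names and responses) is shown below; the same recoding was applied to the polytomous items of the environmental motivation scale before the Rasch analyses.

```python
# Minimal sketch (hypothetical items): recode 5-point responses into a dichotomous format.
import pandas as pd

# Hypothetical raw responses on three altruism items (1-5 Likert format).
raw = pd.DataFrame({
    "altruism_01": [1, 4, 5, 2],
    "altruism_02": [3, 3, 5, 4],
    "altruism_03": [2, 5, 4, 1],
})

# Levels 1-3 are recoded as 0 (low motivation), levels 4-5 as 1 (high motivation).
dichotomous = (raw >= 4).astype(int)
print(dichotomous)
```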

3.2.2 Environmental motivation.

We used the 40 items from the General Ecological Behavior scale to assess participants' environmental motivation [23, 25]. All items measure attitudes via actual and previously performed behaviors (e.g., “I buy convenience food”). Of the 40 items, 11 had a dichotomous format with the options 0 (person does not exhibit the respective behavior) and 1 (person exhibits the respective behavior). As described above, all polytomous items were dichotomized. Again, the separation reliability was sufficient (r = .76).

3.2.3 Spam ratio.

Participants were instructed to train a spam filter so that it could develop rules for classifying emails as spam. We did not specify how they should identify spam emails, so that sufficient variance in the ratings could arise from individual differences in motivation. Participants rated all eight emails on a 5-point scale ranging from 1 (definitely not spam) to 5 (definitely spam), with one alternative response category (“I would leave this choice to the user”).
For all individuals who rated both pro-environmental emails (or both prosocial emails), the classification of emails from environmental or humanitarian organizations was weighted against the person's overall spam rating of all other emails that were assessed. This weighting was done to take into account individual tendencies to categorize emails as spam more or less frequently in general. This ratio served as an additional measure, controlling for a potential relationship between altruistic or environmental motivation and an overall tendency to categorize emails as spam.
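Because the exact weighting formula is not spelled out here, the sketch below shows one plausible operationalization of such a spam ratio, namely the mean spam rating of the two target emails relative to the mean rating of all other emails a participant rated; the ratings and variable names are hypothetical.

```python
# Minimal sketch of one plausible spam-ratio computation (assumed, not the authors' exact formula).
import pandas as pd

# Hypothetical ratings (1 = definitely not spam ... 5 = definitely spam) for one participant.
ratings = pd.Series({
    "tax_authority": 1, "bank_personal": 2, "bank_impersonal": 4,
    "humanitarian_personal": 3, "humanitarian_impersonal": 4,
    "environmental_personal": 2, "environmental_impersonal": 3,
    "spam_email": 5,
})

env_emails = ["environmental_personal", "environmental_impersonal"]
# Values below 1 mean the target emails were rated as spam less than the person's own baseline.
spam_ratio_env = ratings[env_emails].mean() / ratings.drop(env_emails).mean()
print(round(spam_ratio_env, 2))
```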

3.3 Data Analysis

Data analyses were performed with RStudio [48]. For both environmental and altruistic motivation, we conducted a Rasch analysis [5, 49]. All items demonstrated an MS-infit below 1.3, indicating a satisfactory model fit [5]. For plausibility checks, we conducted a series of paired t tests to test whether (a) the real spam email was significantly more likely to be classified as spam than all other emails and (b) the email from the tax authority (not spam) was significantly less likely to be classified as spam than all other emails. The results confirmed that the genuine spam email received higher spam ratings, and the email from the tax authority received lower spam ratings than all other emails. Furthermore, we tested whether emails from humanitarian and environmental organizations differed in their spam ratings. Emails from humanitarian organizations were more frequently classified as spam (M = 3.78, SD = 1.19) than emails from environmental organizations (M = 3.31, SD = 1.32), t(660) = -9.70, p < .001, 95% CI[-0.50, -0.33].
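As an illustration, the following sketch (simulated ratings, not the study data) shows how one of these plausibility checks, a paired t test comparing ratings of the genuine spam email with ratings of another email, could be run.

```python
# Minimal sketch of a paired t test as used for the plausibility checks (simulated data).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(seed=1)
# Hypothetical paired ratings (1-5 scale) from the same participants.
spam_email_ratings = rng.integers(4, 6, size=200)     # genuine spam email: ratings of 4 or 5
environmental_ratings = rng.integers(2, 5, size=200)  # email from an environmental organization

t, p = ttest_rel(spam_email_ratings, environmental_ratings)
print(f"t = {t:.2f}, p = {p:.4f}")
```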

4 Results

The correlations of the independent variables with the spam ratios are presented in Table 2. Altruistic and environmental motivation were correlated to a small to medium extent (r = .26***). In support of H1 and H3b, environmental motivation was further negatively correlated with categorizing emails from environmental organizations (r = -.19***) as well as humanitarian organizations (r = -.14***) as spam. That is, individuals with a more pronounced environmental motivation showed a tendency to categorize emails from environmental (H1) and humanitarian organizations (H3b) as spam less often than those with less environmental motivation. Contrary to our expectations in H2 and H3a, altruistic motivation was not significantly correlated with the categorization of emails from environmental or social organizations.
Table 2:
| Variable | N | M | SD | 1 | 2 | 3 |
| 1. Altruistic motivation | 766 | 0.50 | 1.56 | - | | |
| 2. Environmental motivation | 766 | -0.00 | 0.84 | .26*** | - | |
| 3. Spam ratio (environment) | 692 | 0.87 | 0.33 | .04 | -.19*** | - |
| 4. Spam ratio (humanitarian) | 706 | 0.98 | 0.25 | .01 | -.14*** | .44*** |
Table 2: Means, standard deviations, and correlations of the relevant variables
Note: * indicates p < .05. ** indicates p < .01. *** indicates p < .001. Spam ratio (environment) refers to the categorization of emails from environmental organizations as spam relative to the other emails; Spam ratio (humanitarian) refers to the categorization of emails from humanitarian organizations as spam relative to the other emails.
We further calculated correlations between altruistic and environmental motivation and the eight email ratings. Table 3 presents the results.
Table 3:
| Variable | N | M | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| 1. Altruistic Motivation | 766 | 0.50 | 1.56 | | | | | | | | | |
| 2. Envir. Motivation | 766 | -0.00 | 0.84 | .26*** | | | | | | | | |
| 3. Tax authority | 694 | 2.78 | 1.71 | -.10* | .01 | | | | | | | |
| 4. bank (p) | 676 | 3.54 | 1.64 | -.11** | -.00 | .33*** | | | | | | |
| 5. bank (i) | 694 | 4.19 | 1.17 | -.02 | -.02 | .06 | .16*** | | | | | |
| 6. humanitarian org. (p) | 640 | 3.76 | 1.36 | -.04 | -.11** | .09* | .13** | .27*** | | | | |
| 7. humanitarian org. (i) | 643 | 3.79 | 1.29 | -.04 | -.10* | .09* | .13** | .28*** | .49*** | | | |
| 8. environmental org. (p) | 635 | 3.04 | 1.54 | -.02 | -.17*** | .17*** | .15*** | .25*** | .47*** | .44*** | | |
| 9. environmental org. (i) | 624 | 3.61 | 1.37 | .02 | -.16*** | .07 | .11** | .25*** | .51*** | .57*** | .49*** | |
| 10. spam email | 734 | 4.78 | 0.72 | -.01 | .13*** | .00 | .12** | .28*** | .14*** | .21*** | .05 | .10* |
Table 3: Means, standard deviations, and correlations of all spam emails
Note: * indicates p < .05. ** indicates p < .01. *** indicates p < .001. (p) denotes personal emails (e.g., containing a thank you for a donation), (i) denotes impersonal emails (e.g., a request to sign a petition).
The correlations between environmental motivation and the ratings of emails from humanitarian and environmental organizations were consistent for both personalized and nonpersonalized emails. Moreover, a positive correlation was observed between environmental motivation and ratings of the genuine spam email as spam (r = .13). Altruistic motivation was not correlated with either of the ratings of emails from environmental or humanitarian organizations. However, it was slightly negatively correlated with the spam rating of the tax authority's email (not spam, r = -.10*) and the personalized email from a bank (r = -.11**). Thus, individuals with higher levels of altruistic motivation were less likely to classify these two emails as spam. Furthermore, predominantly significant positive correlations were found between the different email ratings.
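For readers who want to retrace how a single cell of such a table can be computed, the sketch below (hypothetical values) calculates one pairwise Pearson correlation with its p value; presumably, missing ratings were handled pairwise, which would explain why the Ns differ across the rows of Tables 2 and 3.

```python
# Minimal sketch (hypothetical data): one correlation under pairwise deletion of missing values.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "environmental_motivation": [0.2, -0.5, 1.1, 0.4, np.nan, -0.9],
    "spam_ratio_environment":   [0.9, 1.05, 0.7, np.nan, 0.8, 1.1],
})

# Only participants with values on both variables enter the correlation.
pair = df.dropna()
r, p = pearsonr(pair["environmental_motivation"], pair["spam_ratio_environment"])
print(f"r = {r:.2f}, p = {p:.3f}, n = {len(pair)}")
```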
To examine the extent to which environmental and altruistic motivation jointly predicted spam categorization, we ran two separate multiple linear regression analyses after determining that the assumptions (homoscedasticity, normally distributed residuals, and no multicollinearity) were met. These analyses aimed to explore the relationships between these types of motivation and the two spam ratios. The results of these regression analyses are presented in Tables 4 and 5, respectively.
Table 4:
| Variable | B | SE(B) | β | p |
| Intercept | 0.86 | 0.01 | | < .001 |
| Altruistic Motivation | 0.02 | 0.01 | 0.10 | .011 |
| Environmental Motivation | -0.08 | 0.02 | -0.22 | < .001 |
| R² | | | | .05 |
Table 4: Results of the multiple linear regression analysis with spam ratio (environment) as criterion
Note: N = 766
The model in Table 4 significantly predicted the spam ratio (environment), F(2, 689) = 16.3, p < .001, R2 = .05, with environmental motivation being the strongest predictor, β = -0.22, p < .001. When the two predictors were considered jointly, altruistic motivation was positively related to categorizing emails from environmental organizations as spam, but the effect was very small (β = 0.10, p = .011).
Table 5:
| Variable | B | SE(B) | β | p |
| Intercept | 0.98 | 0.01 | | < .001 |
| Altruistic Motivation | 0.01 | 0.01 | 0.04 | .282 |
| Environmental Motivation | -0.04 | 0.01 | -0.15 | < .001 |
| R² | | | | .02 |
Table 5: Results of the multiple linear regression analysis with spam ratio (humanitarian) as criterion
Note: N = 766
The model in Table 5 significantly predicted the spam ratio (humanitarian), F(2,704) = 7.11, p < .001, R2 = .02. Again, environmental motivation was the strongest predictor, β = -0.15, p < .001, whereas altruistic motivation did not significantly predict the spam ratio for emails from humanitarian organizations.
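To make the reported analysis pipeline concrete, the following minimal sketch (simulated data, not the study data) fits a multiple linear regression of the environmental spam ratio on altruistic and environmental motivation, mirroring the models summarized in Tables 4 and 5.

```python
# Minimal sketch of the regression analysis (simulated data with assumed effect sizes).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=42)
n = 766
df = pd.DataFrame({
    "altruistic": rng.normal(0.5, 1.5, n),
    "environmental": rng.normal(0.0, 0.85, n),
})
# Simulate a criterion with a weak negative effect of environmental motivation.
df["spam_ratio_env"] = 0.87 - 0.08 * df["environmental"] + rng.normal(0, 0.3, n)

# Regress the spam ratio on both motivations simultaneously (cf. Table 4).
model = smf.ols("spam_ratio_env ~ altruistic + environmental", data=df).fit()
print(model.summary())  # unstandardized B, SE, t, p, and R²
```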
The aim of this study was to investigate the influence of individuals’ motivation, specifically environmental and altruistic motivation, on the classification of training material for an algorithm. Our findings revealed that individuals with a more pronounced environmental motivation were less likely to classify emails from environmental or humanitarian organizations as spam. By contrast, although correlated with environmental motivation, altruism was not related to the classification of emails from either environmental or humanitarian organizations as spam. In fact, when examined jointly with environmental motivation in the regression analysis, altruistic motivation even exhibited a small positive association with the categorization of environmental emails as spam.
Our results show that individuals’ environmental motivation influenced their decisions during the rating task, thus implying that, depending on their motivation, individuals would have trained an algorithm to permit more emails from environmental and humanitarian organizations to pass through. This finding is in line with Lammert's [31] expectation that software engineers’ values affect the sustainability outcome of a software product. Furthermore, our results show that, in the training of algorithms as well, individuals’ motivation influences their behavior and, more specifically, their decisions, a finding that is consistent with the general expectation that motivation and attitudes influence individual behavior [23]. Our study thereby provides empirical evidence for the emergence of user-to-data bias [37]. Furthermore, the relationship observed between environmental motivation and decisions during the algorithm training task is aligned with existing research on involvement [16]. Individuals' involvement (i.e., their value-based motivation), particularly in areas of personal relevance (e.g., environmental issues), can significantly sway their decision-making during algorithm training toward value-based decisions. Our findings further corroborate research on social media content sharing [12], where personal values have been found to guide sharing decisions, thereby creating a bias in the transmission of information [52]. The present study extends this understanding to the domain of algorithm training, suggesting that similar principles of value-based decision-making apply. Just as individuals are more likely to share content on social media when it is related to topics that are personally relevant to them, they also tend to train a spam filter algorithm in ways that correspond to their motivation.
Contrary to our expectations, altruistic motivation did not have an impact on spam classification, a finding that might be explained by several factors. The task of classifying emails might not have been perceived as an opportunity to benefit other humans. As the correlation table shows, participants with a higher altruistic motivation were less likely to classify genuine personal emails (e.g., those from the financial administration or a bank) as spam. This could indicate a focus on the welfare of the recipients, ensuring that legitimate personal emails do not end up in the spam folder. Furthermore, perceptions of humanitarian organizations might have influenced the results. Participants may have held a more negative view of humanitarian organizations than of environmental organizations (visible in an overall higher rate of classifying emails from humanitarian organizations as spam versus emails from environmental organizations). Due to negative press on some organizations regarding the mismanagement of donations, e.g., [9], participants with a higher altruistic motivation might have aimed to prioritize protecting email recipients over promoting humanitarian organizations. This tendency is supported by the finding that altruistic motivation was not related to classifying emails from environmental organizations as spam either. In addition, when examined jointly with environmental motivation, altruistic motivation was even positively related to categorizing environmental emails as spam. Thus, the aim of helping other individuals might have been channeled differently in this categorization task.
The finding that individuals with a more pronounced environmental motivation were also less likely to classify emails from humanitarian organizations as spam is consistent with research on the relationship between prosocial and pro-environmental behavior [40, 44]. It is likely that environmentalists have a more inclusive self-concept, that is, they more easily identify with human and nonhuman others [21]; therefore, they want not only to protect the environment but also to benefit other humans. Thus, this view could extend to a more favorable view of social organizations and a lower likelihood of classifying their emails as spam.
To summarize, our findings provide the following main contributions to the literature on algorithmic biases: Our study emphasizes the need to consider the motivation (and, on a higher level, the values) of stakeholders of an algorithmic system and their role in the emergence of bias. A growing body of research is dedicated to discovering and developing technical methods for minimizing biases in training data [37]. Our research shows that in addition to technical means, it is also important to consider the workforce by taking into account not only their diversity [3] but also their motivation and underlying values. Furthermore, our findings introduce a new perspective: examining algorithmic biases as an opportunity to align the algorithms with other values if needed. Our study underscores the idea that algorithm development is not just about minimizing algorithmic biases but also about understanding and potentially leveraging the algorithms so that they are aligned with broader societal values. For instance, to assess creditworthiness, a credit scoring algorithm typically focuses on historical financial data. However, if the developers value financial inclusivity, they might incorporate additional (not typically used) data points (e.g., rental history) in the model. Such inclusivity would offer better opportunities to certain groups of people who are disadvantaged by other scoring algorithms.

4.1 Expanding the scope beyond spam filters

In our study, we chose spam filters as a specific case study because they provide a relatively narrow but controlled environment. The behavioral shifts observed in the spam filter training task may significantly impact the flow of information when scaled to larger systems. In these broader contexts, slight biases, akin to those in this study, can impact the visibility and dissemination of content, thereby influencing public access to information. While the specific task of categorizing emails may seem limited in scope, it mirrors the decision-making processes found in many more complex algorithmic systems. In other, more complex contexts, the basic mechanism will be quite similar, if not the same, because even though an amalgam of several motives (and values) can be at work, motives as well as values are usually compensatory, and thus, they simply add up independently.
Just like spam filters, such systems often involve sorting and classifying information. For instance, large language models learn through curators’ ratings of whether the content is appropriate or not. In the realm of social media, content moderation algorithms function similarly to spam filters to determine the appropriateness of content. Similarly, news feed algorithms curate a user's news stream on the basis of perceived interests. In recommendation systems—whether for e-commerce, streaming services, or online advertising—the sorting and recommending of items to users is another domain where the decision-making process can be subtly influenced by the values and biases of the stakeholders.
Whereas the systems described above are algorithms that are similar to spam filters with respect to their training (i.e., information is sorted by humans), the fundamental assumption that values can influence algorithmic output through motivation is applicable to a variety of contexts. Our research focused on content curators; however, similar effects could emerge among developers, users, and other stakeholders. Developers’ motivations could influence their decisions in the development process in a similar manner. On the other hand, user feedback further shapes algorithmic decision-making. Thus, the supposedly small influences of the values and motivations of certain groups could have a large impact all together.

4.2 Practical implications

Our findings contribute to the understanding and management of biases in algorithmic systems. Recognizing these underlying values can help developers and policymakers create algorithms that are sustainable and socially responsible. For instance, it may be instrumental to integrate sustainability-focused training and interventions into the education or workplaces of those who develop or curate algorithms. Fostering a culture of environmental stewardship and social responsibility in tech companies might provide a basis for these motivations to be reflected in the decisions made during the development and training of algorithms and, thus, in algorithmic output. Whereas our results suggest, on the one hand, that the influence of individual values on algorithmic output could be utilized to promote positive outcomes (see also [61]), they also highlight, on the other hand, that values may have unintended consequences in the form of biases that need to be mitigated. For instance, clients who want to apply an algorithm might not want sustainability motivation (or other motivations) to influence training decisions. The biased selection of training material, as observed in our study, echoes concerns similar to those found in research on social media, such as the selective sharing of posts. For instance, Shin and Thorson [52] showed that partisans tend to spread content that favors their perspective while criticizing the opposition—a bias that could similarly affect algorithms responsible for content selection and dissemination. Thus, raising awareness about such value- or motivation-based biases is crucial for developing strategies to reduce them. For instance, it may be necessary to discuss (a) whether and how the influence of stakeholders’ values and motivations could or should be taken into account in regulations and guidelines pertaining to algorithm development and (b) potential ways to reduce such influences. In their recent work, Hardy et al. [18] adapted a content selection algorithm to be used in a social network. The aim was for the algorithm to rearrange the content from individuals’ personal networks in such a way that it was representative of the perspectives of the population. This approach enables a more balanced flow of information and reduces echo chambers.

4.3 Limitations and future research

There are several limitations to consider when interpreting the results. First, the fact that we did not find an influence of altruistic motivation on the categorization of emails might be due to the measurement method and the material we used. The scale we used is widely applied to assess altruism; however, it might not have adequately captured the nuances of altruistic motivation that would affect spam ratings, or the ratings might be influenced by a negative image of humanitarian organizations. Future studies should assess this relationship using different measures and by using different and more diverse stimulus material. Second, we used a cross-sectional design; thus, the directions of the relationships can be derived only from theory. Third, our study focused on a specific use case, and the effect sizes were rather small. We chose spam filters as a specific case study, which, while narrow, provided a more controlled environment to observe the subtle influence of values on decision-making. Recognizing and understanding these effects lays the groundwork for further research that can investigate the implications of such value-based decisions in broader, real-world contexts where decisions are made at scale and where their impacts can be far-reaching.

5 Conclusion

Our study reveals that the individual values of those who develop and train algorithms—or more specifically, their environmental motivation—can influence the training of algorithms and, thus, algorithmic output. Understanding these dynamics provides the opportunity to design algorithms that are slightly biased toward social and ecological sustainability, which eventually leverages sustainable behavior in individuals. Simultaneously, this finding emphasizes that the decisions made by the stakeholders of an algorithm can also inadvertently shape the algorithm's output in undesirable directions, and thus, awareness of this source of bias is important.

Acknowledgments

The authors thank the Stiftung Innovation in der Hochschullehre [Foundation Innovation in Higher Education] for supporting our research (Grant no.: FBM2020-EA-1670-01800). The authors would also like to thank Jane Zagorski for her language support and Maike Hering for her feedback on the manuscript.

    A Appendices

    A.1 Stimulus Material - Example

    Figure A1:
    Date: 08/24/2021
    From: UNICEF
    To: ...
    Subject: Malnutrition in Yemen: Saba has made it
    Good day,
    We would like to inform you about the work we have been able to carry out in Yemen thanks to the support of donors. Here's a brief report: One-year-old Saba's health was dramatically impaired when her mother brought her to the nutrition center that UNICEF supports. She had eaten far too little for weeks. She was severely underweight and weak. Saba urgently needed treatment.
    It's so wonderful to see how she regained her strength thanks to the specially tailored therapeutic nutrition. Thanks to our regular donors, we are able to provide immediate help to children like Saba. She recovered within a few days with this help. If you would like to see how Saba is doing now, click here.
    Your donation saves lives and makes our work possible, and it is urgently needed everywhere. Thank you very much for your regular support.
    Figure A1: Stimulus Material (prosocial personal email), translated

    Supplemental Material

    Video presentation (MP4 file) with transcript.

    References

    [1]
    American Psychological Association: Ethical Principles of Psychologists and Code of Conduct (2017). Retrieved from https://www.apa.org/ethics/code/index.
    [2]
    Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2022. Machine Bias. In Ethics of Data and Analytics. Auerbach Publications, Boca Raton, FL, 254–264. https://doi.org/10.1201/9781003278290-37.
    [3]
    Solon Barocas and Andrew D. Selbst. 2016. Big Data's Disparate Impact. California Law Review 104, 3 (June 2016), 671-732. https://doi.org/10.2139/ssrn.2477899.
    [4]
    C. D. Batson. 2010. Empathy-induced altruistic motivation. In Prosocial motives, emotions, and behavior. The better angels of our nature, Mario Mikulincer and Phillip R. Shaver, Eds. American Psychological Association, Washington, DC, 15–34. https://doi.org/10.1037/12061-001.
    [5]
    Trevor G. Bond and Christine M. Fox. 2007. Applying the Rasch model: Fundamental measurement in the human sciences (2nd. ed). Lawrence Erlbaum Associates Publishers, Mahwah, NJ.
    [6]
    Miles Brundage. 2014. Limitations and risks of machine ethics. Journal of Experimental & Theoretical Artificial Intelligence 26, 3, 355–372. https://doi.org/10.1080/0952813X.2014.895108.
    [7]
    Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, February 23-24, 2018, New York, NY, 77–91.
    [8]
    Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334, 183–186. https://doi.org/10.1126/science.aal4230.
    [9]
    CharityWatch. 2011. CharityWatch Calls for Resignation of Central Asia Institute's Founder Greg Mortenson (2011). Retrieved from https://www.charitywatch.org/charity-donating-articles/charitywatch-calls-for-resignation-of-central-asia-institute39s-founder-greg-mortenson.
    [10]
    Christopher F. Clark, Matthew J. Kotchen, and Michael R. Moore. 2003. Internal and external influences on pro-environmental behavior: Participation in a green electricity program. J. Environ. Psychol. 23, 237–246. https://doi.org/10.1016/S0272-4944(02)00105-6.
    [11]
    Jacob Cohen. 1992. A power primer. Psychological Bulletin 112, 1, 155-159. https://doi.org/10.1037/0033-2909.112.1.155
    [12]
    Danielle Cosme, Christin Scholz, Hang Y. Chan, Bruce P. Dore, Prateekshit Pandey, Jose C. Tartak, Nicole Cooper, Alexandra Paul, Shannon Burns, and Emily B. Falk. 2023. Message self and social relevance increases intentions to share content: Correlational and causal evidence from six studies. J Exp Psychol Gen. 152, 1(Jan. 2023), 253-267. https://doi.org/10.1037/xge0001270
    [13]
    Samantha J. Dobesh, Tyler Miller, Pax Newman, Yudong Liu, and Yasmine N. Elglaly. 2023. Towards Machine Learning Fairness Education in a Natural Language Processing Course. In Proceedings of the 54th. ACM Technical Symposium on Computer Science Education, March 2023, ACM Inc., New York, NY, 312–318. https://doi.org/10.1145/3545945.3569802.
    [14]
    European Parliament. 2023. EU AI Act: first regulation on artificial intelligence (2023). Retrieved September 5, 2023 from https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
    [15]
    Batya Friedman and Helen Nissenbaum. 1996. Bias in computer systems. ACM Trans. Inf. Syst. 14, 3, 330–347. https://doi.org/10.1145/230538.230561.
    [16]
    Susanne Göckeritz, P. Wesley Schultz, Tania Rendón, Robert B. Cialdini, Noah J. Goldstein, and Vladas Griskevicius. 2010. Descriptive normative beliefs and conservation behavior: The moderating roles of personal involvement and injunctive normative beliefs. Euro J Social Psych 40, 3, 514–523. https://doi.org/10.1002/ejsp.643.
    [17]
    Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. Adaptive computation and machine learning. The MIT Press, Cambridge, MA.
    [18]
    Mathew D. Hardy, Bill D. Thompson, P. M. Krafft, and Thomas L. Griffiths. 2023. Resampling reduces bias amplification in experimental social networks. Nat Hum Behav, 1–15. https://doi.org/10.1038/s41562-023-01715-5.
    [19]
    Blair T. Johnson and Alice H. Eagly. 1989. Effects of involvement on persuasion: A meta-analysis. Psychological Bulletin 106, 2, 290–314. https://doi.org/10.1037/0033-2909.106.2.290.
    [20]
    Adheesh Kadiresan, Yuvraj Baweja, and Obi Ogbanufe. 2022. Bias in AI-Based Decision-Making. Bridging Human Intelligence and Artificial Intelligence. Springer, Cham, 275–285. https://doi.org/10.1007/978-3-030-84729-6_19.
    [21]
    Naoko Kaida and Kosuke Kaida. 2016. Pro-environmental behavior correlates with present and future subjective well-being. Environ Dev Sustain 18, 1, 111–127. https://doi.org/10.1007/s10668-015-9629-y.
    [22]
    Florian G. Kaiser, Katarzyna Byrka, and Terry Hartig. 2010. Reviving Campbell's paradigm for attitude research. Personality and Social Psychology Review 14, 4, 351–367. https://doi.org/10.1177/1088868310366452.
    [23]
    Florian G. Kaiser, Terry Hartig, Adrian Brügger, and Caroline Duvier. 2013. Environmental Protection and Nature as Distinct Attitudinal Objects. Environment and Behavior 45, 3, 369–398. https://doi.org/10.1177/0013916511422444.
    [24]
    Florian G. Kaiser, Gundula Hübner, and Franz X. Bogner. 2005. Contrasting the Theory of Planned Behavior With the Value-Belief-Norm Model in Explaining Conservation Behavior1. J Appl Social Pyschol 35, 10, 2150–2170. https://doi.org/10.1111/j.1559-1816.2005.tb02213.x.
    [25]
    Florian G. Kaiser and Mark Wilson. 2004. Goal-directed conservation behavior: the specific composition of a general performance. Personality and individual differences 36, 1531–1544. https://doi.org/10.1016/j.paid.2003.06.003.
    [26]
    Emre Kazim, Adriano S. Koshiyama, Airlie Hilliard, and Roseline Polle. 2021. Systematizing Audit in Algorithmic Recruitment. Journal of Intelligence 9, 3. https://doi.org/10.3390/jintelligence9030046.
    [27]
    Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R. Sunstein. 2018. Discrimination in the Age of Algorithms. Journal of Legal Analysis 10, 113–174. https://doi.org/10.1093/jla/laz001.
    [28]
    Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R. Sunstein. 2020. Algorithms as discrimination detectors. Proceedings of the National Academy of Sciences of the United States of America 117, 48, 30096–30100. https://doi.org/10.1073/pnas.1912790117.
    [29]
    Emily R. Lai. 2011. Motivation: A literature review, 6. Pearson Research's Report.
    [30]
    Anja Lambrecht and Catherine Tucker. 2019. Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science 65, 2966–2981. https://doi.org/10.1287/mnsc.2018.3093.
    [31]
    Dominic Lammert. 2021. The Connection between the Sustainability Impacts of Software Products and the Role of Software Engineers. In Evaluation and Assessment in Software Engineering. ACM, New York, NY, USA, 294–299. https://doi.org/10.1145/3463274.
    [32]
    Richard N. Landers and Tara S. Behrend. 2022. Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models. American Psychologist. https://doi.org/10.1037/amp0000972.
    [33]
    Florian Lange and Siegfried Dewitte. 2019. Measuring pro-environmental behavior: Review and recommendations. J. Environ. Psychol. 63, 92–100. https://doi.org/10.1016/j.jenvp.2019.04.009.
    [34]
    Dominik J. Leiner. 2019. Too Fast, too Straight, too Weird: Non-Reactive Indicators for Meaningless Data in Internet Surveys. Survey Research Methods 13, 3, 229–248. https://doi.org/10.18148/srm/2019.v13i3.7403.
    [35]
    Michael R. Leippe and Roger A. Elkin. 1987. When motives clash: Issue involvement and response involvement as determinants of persuasion. J Pers Soc Psychol 52, 2, 269–278. https://doi.org/10.1037/0022-3514.52.2.269.
    [36]
    Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. 2018. Delayed Impact of Fair Machine Learning. International Conference on Machine Learning, 3150–3158.
    [37]
    Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2022. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 54, 6, 1–35. https://doi.org/10.1145/3457607.
    [38]
    Merriam-Webster. 2017. Spam. Retrieved from https://www.merriam-webster.com/dictionary/spam.
    [39]
    Alexander Neaman, Siegmar Otto, and Eli Vinokur. 2018. Toward an Integrated Approach to Environmental and Prosocial Education. Sustainability 10, 1–11. https://doi.org/10.3390/su10030583
    [40]
    Alexander Neaman, Pamela Pensini, Sarah Zabel, Siegmar Otto, Dmitry S. Ermakov, Elvira A. Dovletyarova, Elliot Burnham, Mónica Castro, and Claudia Navarro-Villarroel. 2022. The Prosocial Driver of Ecological Behavior: The Need for an Integrated Approach to Prosocial and Environmental Education. Sustainability 14, 7, 4202. https://doi.org/10.3390/su14074202.
    [41]
    Safiya U. Noble. 2018. Algorithms of oppression. New York University Press, New York, NY.
    [42]
Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kıcıman. 2019. Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries. Frontiers in Big Data 2, 13. https://doi.org/10.3389/fdata.2019.00013.
    [43]
    Cathy O'Neil. 2017. Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishers, New York, NY.
    [44]
    Siegmar Otto, Pamela Pensini, Sarah Zabel, Pablo Diaz-Siefer, Elliot Burnham, Claudia Navarro-Villarroel, and Alexander Neaman. 2021. The prosocial origin of sustainable behavior: A case study in the ecological domain. Global Environmental Change 69, 102312. https://doi.org/10.1016/j.gloenvcha.2021.102312.
    [45]
    Nelly Oudshoorn, Els Rommes, and Marcelle Stienstra. 2004. Configuring the user as everybody: Gender and design cultures in information and communication technologies. Science, Technology, and Human Values 29, 30–63. https://doi.org/10.1177/0162243903259190.
    [46]
    Shola Oyedeji, Ahmed Seffah, and Birgit Penzenstadler. 2018. A Catalogue Supporting Software Sustainability Design. Sustainability 10, 7, 2296. https://doi.org/10.3390/su10072296.
    [47]
    J. Philippe Rushton, Roland D. Chrisjohn, and G. Cynthia Fekken. 1981. The altruistic personality and the self-report altruism scale. Personality and Individual Differences 2, 4, 293–302. https://doi.org/10.1016/0191-8869(81)90084-2.
    [48]
Posit Team. 2023. RStudio: Integrated Development Environment for R. Boston, MA.
    [49]
    Georg Rasch. 1993. Probabilistic models for some intelligence and attainment tests. Mesa Press, Chicago.
    [50]
Stuart Russell, Daniel Dewey, and Max Tegmark. 2015. Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine 36, 4, 105–114. https://doi.org/10.1609/aimag.v36i4.2577.
    [51]
    Christin Scholz, Mia Jovanova, Elisa C. Baek, and Emily B. Falk. 2020. Media content sharing as a value-based decision. Current Opinion in Psychology 31, 83–88. https://doi.org/10.1016/j.copsyc.2019.08.004.
    [52]
Jieun Shin and Kjerstin Thorson. 2017. Partisan Selective Sharing: The Biased Diffusion of Fact-Checking Messages on Social Media. Journal of Communication 67, 2, 233–255. https://doi.org/10.1111/jcom.12284.
    [53]
Wolfgang Steinel and Carsten K. W. de Dreu. 2004. Social motives and strategic misrepresentation in social decision making. Journal of Personality and Social Psychology 86, 3, 419–434. https://doi.org/10.1037/0022-3514.86.3.419.
    [54]
    Stephane Champely, Claus Ekstrom, Peter Dalgaard, Jeffrey Gill, Stephan Weibelzahl, Aditya Anandkumar, Clay Ford, Robert Volcic, and Helios De Rosario. 2017. pwr: Basic functions for power analysis.
    [55]
    Harini Suresh and John Guttag. 2021. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. In Equity and Access in Algorithms, Mechanisms, and Optimization. ACM, New York, NY, USA, 1–9. https://doi.org/10.1145/3465416.3483305.
    [56]
César Tapia-Fonllem, Victor Corral-Verdugo, Blanca Fraijo-Sing, and Maria Durón-Ramos. 2013. Assessing Sustainable Behavior and its Correlates: A Measure of Pro-Ecological, Frugal, Altruistic and Equitable Actions. Sustainability 5, 711–723. https://doi.org/10.3390/su5020711.
    [57]
    Oliver Taube, Alexandra Kibbe, Max Vetter, Maximilian Adler, and Florian G. Kaiser. 2018. Applying the Campbell Paradigm to sustainable travel behavior: Compensatory effects of environmental attitude and the transportation environment. Transportation Research Part F: Traffic Psychology and Behaviour 56, 392–407. https://doi.org/10.1016/j.trf.2018.05.006.
    [58]
    Ibo van de Poel. 2020. Embedding Values in Artificial Intelligence (AI) Systems. Minds and Machines 30, 3, 385–409. https://doi.org/10.1007/s11023-020-09537-4.
    [59]
    Wendell Wallach and Colin Allen. 2009. Moral machines: Teaching robots right from wrong. Oxford University Press, New York, N.Y.
    [60]
Sarah Zabel and Siegmar Otto. 2021. Bias in, bias out – The similarity-attraction effect between chatbot designers and users. In International Conference on Human-Computer Interaction (Lecture Notes in Computer Science). Springer, 184–197. https://doi.org/10.1007/978-3-030-78468-3_13.
    [61]
    Sarah Zabel, Michael P. Schlaile, and Siegmar Otto. 2023. Breaking the chain with individual gain? Investigating the moral intensity of COVID-19 digital contact tracing. Computers in Human Behavior 143, 107699. https://doi.org/10.1016/j.chb.2023.107699.

    Cited By

• (2024) Drawing the full picture on diverging findings: adjusting the view on the perception of art created by artificial intelligence. AI & SOCIETY. https://doi.org/10.1007/s00146-024-02020-z. Online publication date: 16-Aug-2024.

    Published In

    CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
    May 2024
18,961 pages
ISBN: 9798400703300
DOI: 10.1145/3613904
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

    Publisher

Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 11 May 2024

    Author Tags

    1. Empirical study that tells us about people
    2. Humanities
    3. Quantitative Methods
    4. Sustainability

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Stiftung Innovation in der Hochschullehre

    Conference

    CHI '24

    Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
