
Building Confidence in a World of Eroding Trust

Published: 19 June 2024

Abstract

Trust is an important pillar in digital transformation. It is the key to fostering effective communications and collaborations, and the foundational element in the interactions between humans and technologies. However, the rapid innovation of technologies has also accelerated their widespread exploitation, and trust relationships among people and technologies have become increasingly complex. In this age of eroding trust, it is critical to explore how to steer away from this trend towards sustaining and enhancing trust. Through our review of the varying definitions and applications of trust in existing literature, we believe that there is another perspective from which to address the challenges of trust. In our article, we propose a trust pyramid to investigate the causality factors that influence the decision to trust. We divide the trust pyramid into three classifications (Foundational, Supplemental, and Innovative) according to their order of importance and deliberation. Within each classification, we also propose the causality factors that serve as key trust elements influencing any decision-making. We identify key challenges (e.g., the need for ethical technologies, regulations that keep pace with technological advancement) and address them by proposing viable initiatives that stakeholders can consider to build trust in a systematic manner.

1 Introduction

The Internet-of-Everything (IoE) era has taken center stage in the merging of the virtual and physical worlds, in which people, processes, data, and things are closely connected. These connections are not restricted to computers or mobile devices as we usually know them; they also extend to industrial machinery, vehicles, and other systems with networking capabilities. As the world becomes more dependent on technology, the opportunities for exploitation also increase, which can negatively affect the overall level of digital trust.
The adverse impacts of a digitalized world can be seen in several areas. Attacks on supply chain industries have caused widespread disruption to operations and unavailability of resources [49]. Vulnerabilities affecting software supply chains have compromised software packages and applications, and attacks have exfiltrated sensitive information [1, 2], among other impacts. In the area of artificial intelligence (AI), an investigation conducted by McAfee found that AI systems in autonomous vehicles could be hacked [3]. The algorithms that define the parameters of the vehicle systems were manipulated, causing misinterpretation of information that could lead to safety hazards and deadly incidents. On the impacts of digital content and privacy, fake content is becoming prominent, and it is increasingly difficult to distinguish legitimate from nefarious content. From political agendas to monetary frauds, fake content poses a great risk due to the uncertainties caused by the lack of data authenticity [4, 5].
The impacts on trust can also be felt from these malicious attacks, as shown in Figure 1. First, they reduce people's confidence that government entities, service providers, and suppliers can protect and secure sensitive data [6, 83, 84, 87]. Second, they undermine trust in network-enabled devices and systems [85–87], as users may be concerned about potential loss of information or safety issues. Last, they create apprehension and fear towards the usage of technologies, as people become more cautious and wary of these dangers [88, 89]. These impacts affect trust adversely and can hinder the progression of technology. In the Edelman Trust Barometer report [6], a decline in trust levels was reported across many technological subsectors (e.g., Internet-of-Things, 5G, AI) in 25 of the 27 countries that participated in the survey. In addition to the technological front, trust in government bodies has diminished, further underscored by the effects of undesirable cyber activities.
Fig. 1. Impacts of eroding trust.
As trust remains an important foothold in all interactions and transactions, it is important to understand a key feature of it: trust is easy to lose but difficult to build. It can take many years to build a good reputation and even more effort to regain trust once it is lost. This can have long-lasting effects on the reputation of organizations and the confidence of the public. Against this backdrop of eroding trust, improving trust levels becomes ever more critical. In our work, we aim to propose practical initiatives that stakeholders can undertake to curb the deteriorating confidence that has affected the ecosystem. In Section 2, we discuss the existing works that strive to tackle trust issues in different manners. Section 3 presents our proposed trust pyramid, in which we categorize the different trust elements and highlight some challenges. Our proposed initiatives to increase trust are discussed in Section 4. We conclude the article in Section 5.

2 Existing Literature and Works

There are numerous and varying definitions of trust in existing literature, many of which stem from the social-psychological viewpoint towards trust in other individuals or groups of people. The author of [62] defined trust as the “expectancy held by an individual or a group that the word, promise, verbal or written statement of another individual or group can be relied upon”. The authors in [63] defined trust as the “confident positive expectations regarding another's conduct”. The definitions in [62, 63] focus on the expectation that the trustee will perform the right action for the trustor. The authors of [64] defined trust with the “willingness to be vulnerable” as a key aspect. This adds another lens to the definition of trust: the degree of risk that one is willing to accept, and the vulnerability one is willing to bear, before trusting another individual or group. The author of [65] provided a definition of trust in which “trust is a bet about the future contingent actions of others”. Similar to the definition used in [64], it connotes the behavior and mindset of risk taking. The authors of [66] opined that “Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another”, which encompasses the essence of the trust definitions highlighted in [62–65].
It is not easy to capture all aspects of trust within a short definition. As described in [66, 67], it is extremely challenging to reach a common consensus on a trust definition despite multiple attempts by researchers. As such, the authors of [67] chose to elaborate on the different facets of trust. They agreed that trust is a psychological state that correlates with the expectations of another and the willingness to be vulnerable. Next, they mentioned that trust is relational, in that the relationship involves interaction with the trustee. Whereas the ‘expectations of others’ and the ‘willingness to be vulnerable’ are inclined towards concerns of the self, the relational aspect dwells in how the interaction with the other party can affect the trust level of the trustor (e.g., exacerbate existing concerns, improve confidence). Last, the authors stated that trust is a choice, which highlights the final course of action of deciding to trust or not to trust. The authors of [66] also described a facet similar to relational trust but termed it interdependence: a necessary condition whereby the interest of a person or a group can only be achieved through reliance on another. This signifies that a trust relationship must involve at least two parties. The authors of [68] expanded on their own work [64] to include new dimensions of trust to address the evolution of perspectives on trust. Some of these dimensions include context-specific scenarios (e.g., a supervisor may be perceived to have greater authority and thus be willing to take greater risks in choices compared with a subordinate), cultural implications (e.g., task-oriented and relationship-oriented cultures can have differing initial trust before executing a task), and emotions (e.g., emotions may cause a trustor to take unwarranted risks).
With the proliferation of technologies and digital applications, the domain of trust has expanded beyond the realm of humans. In addition to the traditional definitions of interpersonal trust, researchers and organizations are building upon the definitions to address digital trust as well. The World Economic Forum defines digital trust as “individuals’ expectation that digital technologies and services – and the organizations providing them – will protect all stakeholders’ interests and uphold societal expectations and values” [69]. ISACA, a global professional membership organization, defines digital trust as “the confidence in the integrity of the relationship, interactions and transactions among providers and consumers within an associated digital ecosystem” [70]. The authors of [71] refer to digital trust as the “relationship between a person and an autonomous intellectual agent that exists in a digital environment”. In [72], digital trust is defined as “a trust based either on past experience or evidence that an entity has behaved and/or will behave in accordance with the self-stated behavior”. Through the development of trust relationships over time, we have seen the definition of trust evolve from interpersonal trust between humans to include trust towards technologies, services, organizations, and businesses as well.
One important essence of trust is that trustees are expected to perform certain behaviors. When these behaviors deviate, it can create mistrust. Mistrust can cause discomfort when users are not assured that the engaged services or acquired products can provide the perceived benefits. In a study conducted by Visa [7], consumers are becoming more distrustful of the way their data is managed. As such, individuals increasingly desire greater ownership and empowerment to control the usage of their data. In another study [8], the US House of Representatives Judiciary Committee found that distrust in big tech companies is increasing. One of the main concerns raised was the advancement of technology at the expense of user privacy. In the area of generative AI technologies, there are research works that discuss the issue of transparency of AI models, the lack of explanation of the models’ outputs, and the challenges of hallucination [73–75]. Without understanding how the AI model processes information and how it arrives at a generated output, users cannot be assured that the model is performing as intended for the right reason and producing the right output. If humans, organizations, and businesses do not use technologies as intended, or the technologies and applications do not perform as expected, it can lead to mistrust and cause serious consequences in certain scenarios (e.g., the criminal justice system [73]).
There are many existing works that address the topic of trust and aim to shape the level of trust for the better. They provide extensive trust knowledge and facets that are important for all entities to consider towards elevating confidence in all trust relationships. For example, Rachel Botsman, a leading expert on trust topics, discussed key insights in her book on how technologies can shape the trust culture in the modern world [9]. She highlighted key principles, practiced by some businesses today, on how to encourage humans to trust new ideas and platforms. She discussed limits to the development of technologies that may be beneficial towards sustaining trust levels (e.g., how far, and which, technologies should be fully automated). She also discussed the importance of shared responsibilities between stakeholders and how business practices, government policies, and user behaviors all play important roles in fostering the trust environment. The study in [59] provided some insights related to social trust during the COVID-19 situation. The authors found that strong government support is an important factor in enhancing trust levels between the members of society and strengthening response measures. The forms of support include proactively supplying information and products to combat the virus, delivering encouraging messages to adopt necessary measures, and collaborating with members to minimize the risk of viral infection. In [10], the authors proposed a comparison of different trust modeling architectures in the Internet-of-Things (IoT) environment. The trust architectures serve as platforms to contain and compute trust values of IoT environments, measuring their behaviors, reputations, and accuracies. The authors further discussed the advantages and limitations of the architectures. For example, a distributed architecture is divided into three layers (Things, Fog, and Cloud), and each layer has its own computational capacity and security considerations (e.g., limited computation resources in the Things layer). Centralized architectures integrate all the layers but may introduce a single point of failure. Through these comparisons, the authors hoped to advance the development of measuring trust in IoT environments and ecosystems.
Some works proposed methods and frameworks to evaluate the degree of trust in varying scenarios and environments. The authors of [11] proposed a set of factors and a broad checklist for evaluating trust in different types of relationships, such as the trust from people to people and the trust from things to people. The authors used these sets of relationships to formulate the different parameters in an evaluation rubric. The rubric consists of five aspects (i.e., Security, Comprehensiveness, Usability, Functionality, Robustness). In Security, the requirements focus on defending against threats and attacks that may compromise trust. In Comprehensiveness, the requirements focus on an adaptive model that can scale with evolving technological trends, be contextually suitable, and accurately depict the level of trust. Usability considers the computational resources, usability of data, and the applicability of trust models in different networks. In Functionality, the requirements focus on a decision-making framework for how trust values can affect the level of access. Last, Robustness focuses on requirements regarding the availability of management systems during network disturbances. Figure 2 shows a table of the rubrics that were formulated. In another work [12], the authors proposed a conceptual trust framework to quantify the measurement of digital trust in the workplace. This is done through the mapping of the confidence level in three aspects (i.e., people, technology, and process) and the identification of the drivers impacting these aspects. A descriptive assessment report is produced at the end to demonstrate the correlation between the drivers and the confidence level. The framework aims to evaluate the level of digital trust and serves as a platform to facilitate further studies in the generation of an assessment tool for quantifying digital trust. In [60], the authors proposed a trust computational model to predict the trustworthiness of IoT services. Aided by a novel machine learning algorithm, the model extracts a variety of features as metrics to evaluate trustworthiness. Examples of such features include the co-location relationship, the collaboration frequency and duration of interactions, and a feedback model to assess the historical experience between interactions.
Fig. 2. Table of rubrics in [11].
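To give a concrete flavor of such feature-based trust computation, the following is a minimal Python sketch that aggregates normalized interaction features into a single score. It is an illustration only, not the model of [60]: the feature names, weights, and linear aggregation are all assumptions.

```python
# Illustrative sketch of a feature-based trust score for an IoT service,
# loosely inspired by the kinds of features described in [60]. The feature
# names, weights, and aggregation are hypothetical, not the authors' model.

def trust_score(co_location: float,
                collaboration_freq: float,
                interaction_duration: float,
                feedback: float,
                weights=(0.2, 0.3, 0.2, 0.3)) -> float:
    """Aggregate normalized features (each in [0, 1]) into a trust score."""
    features = (co_location, collaboration_freq, interaction_duration, feedback)
    if any(not 0.0 <= f <= 1.0 for f in features):
        raise ValueError("features must be normalized to [0, 1]")
    return sum(w * f for w, f in zip(weights, features))

# Example: a frequently collaborating, well-reviewed device scores highly.
print(round(trust_score(co_location=0.8, collaboration_freq=0.9,
                        interaction_duration=0.6, feedback=0.95), 3))  # 0.835
```

In practice, a learned model such as that of [60] would replace the fixed weights with parameters fitted to observed interaction outcomes.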
There are also other survey works that compare various trust models applicable to IoT environments, enterprise information systems, and social networks. The authors of [76, 77] discussed trust models for IoT paradigms and offered some insights on the common characteristics of trust and the types of classification used in them. Some of the common characteristics are that trust is context dependent (i.e., only relevant information is processed), asymmetric (i.e., trust does not apply in both directions between two entities), and not perfect (i.e., trust is never absolute). Examples of classification include methods of measuring trust, types of trust to be measured, and the source of trust interactions. In [78], the authors identified three categories to classify trust (i.e., credential-based, reputation-based, and hybrid). Credentials may refer to testimonials or certifications that demonstrate the qualification of a service, while reputation comprises cumulative knowledge of past behavior and performance belonging to the service or product providers. The authors of [79] discussed three aspects of social trust: information collection, evaluation, and dissemination. These three categories are highly relevant in social networks in terms of the type of information collected, the techniques used to evaluate trust levels, and the methods to disseminate information. Figure 3 shows the classification architecture used in [79].
Fig. 3. Trust classification architecture in [79].
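Returning to the common characteristics noted in [76, 77], two of them (context dependence and asymmetry) are easy to see in a toy data structure. The sketch below is a hypothetical illustration rather than a model from any surveyed work; the API and values are assumptions.

```python
# Toy illustration of trust-model characteristics from [76, 77]: trust is
# keyed to a context, and A trusting B says nothing about B trusting A.
# The structure, default prior, and values are hypothetical.

trust = {}  # (trustor, trustee, context) -> level in [0, 1]

def set_trust(trustor, trustee, context, level):
    trust[(trustor, trustee, context)] = level

def get_trust(trustor, trustee, context, default=0.5):
    # "Not perfect": absent evidence, fall back to a neutral prior, never 1.0.
    return trust.get((trustor, trustee, context), default)

set_trust("alice", "bob", "file-sharing", 0.9)
print(get_trust("alice", "bob", "file-sharing"))  # 0.9
print(get_trust("bob", "alice", "file-sharing"))  # 0.5 (asymmetric)
print(get_trust("alice", "bob", "payments"))      # 0.5 (context dependent)
```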
These works play a crucial role in helping researchers consider and improve the design of trust models that address as many aspects as possible. They provide excellent insights to shape the trust cultures in our world and drive continuous improvement in the design of trust models to meet the demands of the evolving landscape. We believe there are gaps that our article can help fill. First, many of the works focus on interpersonal trust [63, 66, 67] or digital trust [73–78], whereas some works discuss both aspects [65, 79]. Some of these works are also focused on certain environments such as AI [73–75], IoT [76, 77], and social networks [79]. It is important to embrace both traditional and newer definitions of trust to capture the vastness of trust considerations. Second, most works focus on the properties of trust relationships and the different classifications of measuring trust [11, 12, 60, 66–68, 74, 76–79, 87] but not on the root causality factors that influence the decision to trust. We aim to provide another viewpoint on improving trust by looking at the key root elements that affect this causality and influence the decision. We aim to propose a trust model that is agnostic to the coverage bounded by the various trust definitions and to any specific types of interactions between humans and/or technologies, and that is applicable to any type of environment. We also aim to propose initiatives that we feel may be beneficial in understanding some possible courses of action to address the challenges in trust.

3 Proposed Trust Pyramid in the Digital Society

Before we propose the initiatives, it is important to develop a flow for the trust classifications and elements in the consideration for elevating trust. This flow provides a stepwise method for stakeholders to assess each classification and element and to prioritize carefully. Once the flow is developed, it can be utilized to formulate actions that stakeholders can perform. We present this flow in the form of a trust pyramid, shown in Figure 4, which is relevant to the current digital age and context. In general, we classify three types of trust: Foundational Trust, Supplemental Trust, and Innovative Trust. The model adopts a hybrid form of hierarchy in which the classification at the bottom is considered first before flowing upwards, while the elements within each classification can be considered in parallel. We believe that this hybrid systematization provides an ordered pathway for users to consider the needs pertaining to trust without conforming all the elements to a strictly ordered sequence. Maslow's Hierarchy of Needs [58] posits that an individual is motivated to satisfy the physiological needs, as the most fundamental layer, before progressing to the next motivational layer. Drawing inspiration from that design and applying it to trust: if the needs belonging to the first classification are not satisfied, it may be difficult to consider the next classification of trust elements. Differing from Maslow's model, our proposed model does not apply only to individuals as a unit of analysis. Instead, it is designed to be agnostic (i.e., applicable to individuals, groups of individuals, organizations, businesses, and the technological products/services designed by any of them). For our proposed trust pyramid, we will discuss the need for each classification and the elements within, and investigate some issues that the community may face for them.
Fig. 4. Proposed trust pyramid in the digital society.
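As a hypothetical illustration of this hybrid flow (not a normative implementation), the sketch below assesses the classifications strictly bottom-up while treating the elements within each classification as an unordered set; the element scores and threshold are assumptions.

```python
# Minimal sketch of the pyramid's hybrid evaluation flow: classifications are
# assessed bottom-up in strict order, while elements within a classification
# are assessed in parallel (here, in any order). Threshold is hypothetical.

PYRAMID = [
    ("Foundational", ["Motivation", "Security", "Integrity"]),
    ("Supplemental", ["Familiarity", "Ownership", "Reputation"]),
    ("Innovative",   ["Novelty"]),
]

def assess(scores: dict, threshold: float = 0.6) -> list:
    """Return the classifications satisfied so far, stopping at the first
    classification whose elements do not all meet the threshold."""
    satisfied = []
    for classification, elements in PYRAMID:
        if all(scores.get(e, 0.0) >= threshold for e in elements):
            satisfied.append(classification)
        else:
            break  # lower-level needs unmet: do not progress up the pyramid
    return satisfied

scores = {"Motivation": 0.9, "Security": 0.8, "Integrity": 0.7,
          "Familiarity": 0.5, "Ownership": 0.7, "Reputation": 0.8,
          "Novelty": 0.9}
print(assess(scores))  # ['Foundational']; Familiarity (0.5) blocks Supplemental
```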

(1) Foundational Trust.

Foundational trust refers to the group of elements that establish the bedrock of a trust relationship and are integral in shaping the first impression. Generally, there is no room for trade-offs if these trust elements are compromised. Within the existing literature, many authors agree that an important characteristic of a trust relationship is that it is context dependent or driven by a specific motivation; otherwise, the basis of the trust relationship would be irrelevant. We use the term Motivation to broadly represent this important element of contextual dependency. In [80], the authors surveyed important representative dimensions of trustworthiness for digital trust across various works, including cybersecurity, safety, privacy, fairness, and transparency. These dimensions can shape the core decision of whether the user is willing to be vulnerable and decide to trust another party. We use the term Security to broadly represent the element with security or safety implications and the term Integrity to represent the element relating to privacy, fairness, and transparency concerns.
Motivation: Before any trust relationship can begin, its purpose must be ascertained. This element is the context of a trust relationship. It depends on whether a person or a product performs the actions it is tasked to do and refrains from actions it is prohibited from doing. For example, we would trust a cybersecurity specialist to harden the security of our products but not to provide financial advice. Similarly, we would trust a portable hard disk to store our data but not to have any recording capabilities that capture audio or video content. Any role, technology, or service has the specific duties it was designed for. Unfortunately, there are incidents in which such trust has been compromised. In an age in which information is widely available, we have seen a deepfake that could cause social and political unrest. Even though it was debunked through experts’ analysis and reviews, the brief appearance of the deepfake video on social media platforms and national television can cast doubt on future authentic media [13]. In another situation, a medical social platform was affected by misinformation [15]. Inaccurate digital content was published and caused unrest within the medical community. This also led to questions regarding the effectiveness of content moderation on the platform, which is supposed to host highly reliable and credible content. Any misinformation could twist the original intention of the media platform.
Security: Another important element in foundational trust is security. This may apply to the physical safety of users, protection from online harm (e.g., emotional, psychological), or the cybersecurity of digital products/services. Regardless of the technologies used in products or services, they should be well designed and built to protect the safety of users in all situations. Even as technologies progress to a stage at which they can reduce the number of incidents, there can be no compromise in this aspect, and it should always be the priority in all functions. Airbnb and Uber are companies that thrive on peer-to-peer reviews, in which hosts and customers provide valuable feedback to one another. They also depend on a model in which strangers are willing to trust one another to utilize the service. One existing challenge of this model is that unsafe incidents can still happen regularly, because the autonomy of the model relies on the willingness of strangers to mutually accept the interaction. Airbnb and Uber have encountered many assault cases throughout their operations, and potentially even more incidents that went unannounced [16, 17]. To protect their reputations, some clients were persuaded to accept compensation and sign non-disclosure agreements to refrain from discussing the details. This arrangement conceals the extent of the precarious situations occurring, which could lead to even more distrust. Nevertheless, the collective power of good reviews and the remote chance of facing dangerous situations wore down the trust barriers, and people continue to make use of the services despite the risks.
Integrity: The topics of building ethical technologies and incorporating ethics into design considerations have been gaining traction. Many organizations understand the need to provide trustworthy products and services, and that challenges that may erode fundamental human values need to be addressed to prevent adverse ramifications. The World Economic Forum released a white paper [18] discussing ethics by design and encouraging organizations to use technology responsibly. The paper suggested three design principles for promoting ethical behaviors and highlighted some undesirable consequences of inadequate attention to those principles. Business and education leaders are recognizing the significance of the role of ethics in technology and the trust value it brings, in tandem with the technical benefits that most businesses tend to focus on [19, 20]. There are many design challenges when it comes to the embodiment of ethics (e.g., AI bias and privacy concerns), and it may be difficult to overcome all of them at the same time. When Microsoft released an AI chatbot as an experiment to better learn the art of conversational language, mischievous actors soon learned that they could influence the bot's responses in an inappropriate manner. The bot was subjected to racial slurs, misogynistic comments, and sentiments that could fuel a hate agenda [21]. Even though the chatbot was an experimental project, it raised concerns about how to progress AI capabilities without tainting them with biases and inappropriate prejudices. Surveillance technologies are also facing criticism over concerns pertaining to individual privacy. Edward Snowden's disclosures prompted a cultural discussion on surveillance and privacy concerns. Increasingly, operational technology (OT) equipment is designed to collect multiple types of data, which has only exacerbated the existing concerns [50]. The author of [22] sought to provide new perspectives on the ethical stakes when any form of mass surveillance is in place. Nonetheless, he noted that it is difficult to justify the conduct of surveillance and that a comprehensive assessment of the objectives and implications of such activities is needed.

(2) Supplemental Trust.

Supplemental trust refers to the group of elements that can be considered second order to those in the foundational trust classification, further improving the trust levels that have been established. The supplemental trust classification is similar to the psychological needs in Maslow's model, whereas the foundational trust classification is similar to the basic needs. These trust elements are supplementary to those in the foundational trust classification: still important for consideration, but less crucial. Compared with the safety or privacy concerns of a product/service, a user's familiarity with or ownership over a product/service affects trust to a lesser extent and is of lesser importance. Likewise, reputation can be built over time and does not outweigh the importance of safety in the same regard. Therefore, we use the terms Familiarity, Ownership, and Reputation to represent the elements within this classification.
Familiarity: Technological advancement often combines familiar components with novel components in its design. Apple adopted this concept heavily during the initial designs of their products. The concept is called skeuomorphism [23]. Briefly, in terms of interface design, skeuomorphism refers to having a design imitate the form of an object in real life. The main purpose of adopting this design concept is to enable users to make quick connections between functions and designs and to intuitively understand their applications. By drawing on these familiarities, the trust barriers are lowered, as users face fewer unknowns. Rachel Botsman also echoes this concept of using familiarity to introduce new technologies [9, 24], emphasizing that the technology should be novel but not wholly new. The concept of familiarity is also echoed in [80], but applied in the context of human relationships: people who are perceived to be more similar are associated with having a greater understanding between them, which can lead to a greater inclination to trust one another and form closer relationships. However, one must be cautious when using familiar components to design capabilities. One example is code reuse, in which unresolved existing vulnerabilities can lead to unwanted effects. The authors of [25] explored the period between software release and the discovery of the first vulnerability as a function of familiarity with the system. Through the examination of a software vulnerability lifecycle, they concluded that software reuse could contribute a significant number of new vulnerabilities and pose unexpected security challenges, even when the technologies are mature.
Ownership: Our trust in and dependency on machines have been evident for a long time. From mobile phones to digital watches, we carry an array of connectivity-enabled devices and make use of them in our daily lives. A study by Statista [26] estimated that, globally, a person would own more than three connected devices by 2023, and this number continues to grow. It is unquestionable that these technological instruments are so well integrated into our lives that it is almost impossible to exclude them. With the ownership of these devices at our disposal, we also feel empowered and trust that they can serve our needs as and when we require them to. This is due to the control that we have over such devices: we can ask them to execute any action according to our wishes. If they do not malfunction or fail, our faith in them continues to increase each day. One of the challenges arising from this deep dependency is that humans may become desensitized towards indicators of malfunctioning technologies. The author of [65] highlighted that if users have a degree of control and ownership over the predictability of outcomes caused by natural occurrences, there can be a greater perceived level of trust. In [27], the pilots of an aircraft encountered an unfamiliar circumstance, and the evidence suggested that they could not comprehend the situation, which resulted in a fatal accident. Analysts have pointed out that over-reliance on the aircraft systems was partly the reason for the accident. The author of [28] also highlighted the dangers of automation complacency in a medical setting, in which practitioners tend to favor the information provided by the technologies. These dangers are aggravated especially when the system is highly reliable. Another incident caused by automation complacency was a fatal accident in a Tesla vehicle, in which the car did not analyze the conditions accurately and perform the right action [29, 30]. It was a tragic loss, but some analysts argued that it might have been avoidable if the driver had stayed alert. From these incidents, we observe that tragedies can still happen despite overriding mechanisms being in place. These examples show that misplaced trust in automated abilities can lead to mishaps and thus call for the right balance between the level of trust and control.
Reputation: Trust often goes together with the reputation of companies, people, or environments. With a history of quality products and services, users are more inclined to put their faith in reputable organizations. Silicon Valley is favored by many as an ideal destination for building startups. The abundance of infrastructure, manpower resources, and collaboration activities fosters partnership opportunities for the resident companies and accelerates the growth of businesses [31]. From an innovation standpoint, the authors of [32] concluded in their study that there is a positive relationship between the technological reputations of organizations and the number of innovative solutions. They also noted that increased intensity of marketing efforts may not translate to higher reputations. In another study [33], the authors investigated the factors that can affect the trust levels and reputations of companies via online reviews. Their findings showed that the number of online reviews and the granularity of the content could affect the measure of trust: generally, trust levels are higher with a larger number of reviews and higher granularity of details. However, online reviews can be manipulated to distort the authenticity and integrity of feedback on products and services. The world's leading e-commerce sites report that an average of 4% of all online reviews are fake [34]. Fraudulent positive reviews can increase business revenues substantially, giving businesses more credit than they deserve. Similarly, ill-intentioned actors and rivals can contribute fraudulent negative reviews to influence the behaviors of online consumers in the other direction.
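As a toy illustration of how the findings in [33, 34] might be operationalized, the sketch below lets reputation grow with review volume and content granularity while discounting an estimated fake-review rate. Every formula and parameter here is an assumption, not a validated model.

```python
# Hypothetical reputation score informed by [33, 34]: more reviews and more
# detailed reviews raise trust; an estimated fake-review rate discounts it.
import math

def reputation(avg_rating: float, n_reviews: int,
               avg_detail: float, fake_rate: float = 0.04) -> float:
    """avg_rating and avg_detail (a granularity proxy) are in [0, 1]."""
    volume_factor = 1 - math.exp(-n_reviews / 50)  # saturates with many reviews
    detail_factor = 0.5 + 0.5 * avg_detail         # detailed reviews count more
    return avg_rating * volume_factor * detail_factor * (1 - fake_rate)

print(round(reputation(avg_rating=0.9, n_reviews=200, avg_detail=0.8), 3))
```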

(3) Innovative Trust.

Innovative trust refers to the group of elements that seek to maximize trust through innovative ideas, technologies, or solutions. Intentional innovation and strategies are required to enable users to trust more easily by making solutions more secure, promoting inclusivity, and giving users the experience of a valued customer [81]. With the increased reliance on smart devices and the awareness of cyber risks, consumers tend to favor the latest technologies and products for better security and reliability [82]. As such, innovation is an important element in driving trust to greater heights. We use the term Novelty to represent the trust element within this classification.
Novelty: Beyond their offerings of products and services, most big companies also make considerable research and development (R&D) investments to stay relevant in their fields. Technological growth can bring about economic progress and health benefits, and can nurture a sustainable ecosystem. Reports show companies focusing their research in areas such as AI and machine learning, digitalization technologies, and sustainable capabilities [35, 36]. In [37], we can see the significant investment being channeled to R&D in numerous global companies. Studies have shown that continued R&D investment and intensity can have significant and positive impacts on technological innovation and business outcomes [38, 39]. Businesses that failed to innovate and adjust to trends were phased out in favor of better and more popular alternatives. For example, Nokia and Blackberry were once huge players in the production of mobile phones. They dominated the market before Apple iPhones and Android phones became popular, but their lack of innovation and adaptability to new trends soon fueled their demise [40, 41]. There is no doubt that ongoing innovation is one of the key ingredients to sustaining businesses and maintaining consumers’ trust overall. As consumers ride the technological waves and trends, it is crucial for businesses to keep pace with progress to retain consumers’ confidence in them.
There are also challenges within this classification. As with any novel technology, one of the topmost concerns pertains to regulation [42]. It is difficult to track new technologies that have emerged in the market, and guidelines related to specific technologies often lag far behind technical development. In AI development, guidelines related to AI security are in dire need of updating due to rapid advancements. An interim report by the National Security Commission on Artificial Intelligence highlighted that a very small percentage of AI research goes toward defending AI systems against adversarial efforts [51]. Furthermore, adoption of machine learning and AI capabilities is scaling faster than the defenses necessary to protect them [52], and researchers are still discovering new ways to exploit AI technologies [54]. The emergence of generative AI can also cause potential adverse effects on the community. As the author of [59] explains, a dangerous feature of generative AI is that it can produce content that is highly personalized to the user. This can be highly persuasive and alter the trust level between the technology and the user. Even though the technology landscape has been active for more than a decade, specific guidelines have only been published since 2020 [43]. Nonetheless, the development of AI security standards is accelerating with more focused efforts. Another challenge accompanying novel technologies is trust in the level of security inherent in them. The risks that new technologies bring are unknown and not extensively explored, and it will be hard for cyber defenders to stay ahead of adversaries. For big organizations keen to stay ahead of the competition with brand new innovations, their reputations could be at stake if they do not remain consciously aware and take deliberate and necessary actions.

4 Proposed Initiatives

Beyond the classification of trust elements in the trust pyramid, we also propose practical initiatives that stakeholders can consider and implement to oversee the growth of the different trust facets. Looking at each trust classification in the model, we identify challenges that are relevant to the elements within it and suggest initiatives that may help address them. The initiatives were identified by examining and refining existing ideas that have not been widely adopted and that may help address these challenges. They are categorized according to the levels in the trust pyramid where they are most appropriate and strategic. To the best of our knowledge, these initiatives are not widely implemented globally. The proposed initiatives are not meant to be complete and exhaustive; rather, they serve as catalysts for implementation and as insights into possible pathways to improve trust. Continuous effort would also be required to explore innovative ideas that keep pace with the changing trust landscape. A summary of the initiatives is presented in Table 1.
Table 1. Summary of Proposed Initiatives

Foundational Trust [Motivation, Security, Integrity]
  Challenges:
  - Ongoing challenges to ensure safety for business models that are built on physical trust between strangers
  - The rising need to design ethical technologies that are not discriminative
  Proposed initiatives:
  - Provision of visible alerts to leverage community support for safety
  - Third-party inspection/certification in ethical features

Supplemental Trust [Familiarity, Ownership, Reputation]
  Challenges:
  - Potential complacency caused by heavy reliance on fully automated technologies
  - Inaccurate reflection of reputations caused by misinformation and fraudulent reviews
  Proposed initiatives:
  - Guidelines for the retention of human interaction designs in fully automated systems
  - Development and adoption of a global reputation assessment for critical products and services

Innovative Trust [Novelty]
  Challenges:
  - Challenging for regulations to keep pace with technological advancement
  Proposed initiatives:
  - Encourage each organization to provide the public community a legitimate avenue and framework for vulnerability reporting

A. Initiatives Supporting Foundational Trust

As mentioned in Section 3, the elements in the foundational trust classification are critical as a baseline. Compromising them can directly affect people's security and cloud their judgement when placing trust in the future. It is of paramount importance to protect this foundation and minimize any negative impacts.

(1) Provision of Visible Alerts to Leverage Community Support for Safety and Security.

For business services that leverage strangers’ trust towards one another (e.g., ride-hailing services), the risk of safety being compromised is a challenge for both hosts and clients. Thus, it is important to explore solutions that can address the safety and security concerns of all parties. The authors of [44] proposed a few solutions to enhance safety in ride-hailing services. These include sending a distress alarm via the ride-sharing application to notify law enforcers or the company, and incorporating a social media plug-in to show a live transmission of the ride so that the community can help keep track of the rider's safety. Grab has progressively rolled out its AudioProtect function in a South-East Asian country, where both drivers and passengers have the option to enable audio recording during rides [61]; this solution could potentially be implemented across all the countries in which it operates. The proposed solutions are useful and may provide a form of deterrence against malicious intentions. However, some potential undesirable effects remain. First, there is a lag between the triggering of a distress signal and the arrival of law enforcement assistance. Additionally, there may be privacy concerns over data tracking when live transmission and audio recordings are involved. Regulatory involvement and guidance may also be required on whether the implementations should be voluntary or mandatory. Nonetheless, in adverse situations in which safety is prioritized, we believe it may be more useful if technologies and systems were designed to enable public assistance in some form. One possible initiative that technologists and authorities can adopt is the provision of real-time visible alerts within the vicinity of incidents. For example, cars approved for ride-sharing may be required to install a light signal that both riders and drivers can activate to indicate distress. This signal is visible to people nearby, who may then render assistance or help track the movement of the victims until law enforcement arrives. This initiative may also be applicable to Airbnb houses whenever homeowners or guests face dangerous situations.
One important consideration for this initiative is resources. If the public community is enabled to help trigger alerts, it becomes extremely important for emergency services to optimize resource allocation and dispatch while discerning between legitimate and fake alerts. For example, emergency service agencies may need to leverage AI-based detection and analytical technologies to assist in processing multiple data sources and formats (e.g., media and textual information uploaded by a member of the public, surveillance systems within the vicinity of the reported incident), cross-checking them against one another for validity and accuracy to detect fake alerts.
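A hypothetical sketch of such alert triage is given below: each corroborating source contributes a weighted confidence that the alert is genuine, and dispatch is gated on the combined score. The source names, weights, and threshold are illustrative assumptions, not a description of any deployed system.

```python
# Hypothetical triage of a community-raised distress alert by cross-checking
# corroborating sources. Sources, weights, and threshold are assumptions.

def alert_confidence(signals: dict) -> float:
    """signals maps a source name to a confidence in [0, 1] that the alert
    is genuine (e.g., produced by a per-source ML classifier)."""
    weights = {"rider_app": 0.3, "bystander_report": 0.2,
               "cctv_analysis": 0.3, "vehicle_telemetry": 0.2}
    return sum(weights[s] * signals.get(s, 0.0) for s in weights)

signals = {"rider_app": 1.0, "bystander_report": 0.7, "cctv_analysis": 0.6}
score = alert_confidence(signals)
print(f"confidence={score:.2f}", "DISPATCH" if score >= 0.5 else "VERIFY FIRST")
```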

(2) Third-party Inspection/certification in Ethical Features.

The call for technological trust and the increasing integration of technology into our lives have pushed technologists to drive the design of ethical technologies. As the author describes in [45], we are all too familiar with the pattern and cycle of introducing a new technology, rapidly integrating it into our lives, and then confronting the problems that surface after using it. The prominence of AI technologies has raised concerns about their advancement, leading more leaders to advocate designing ethics into technologies. In [46], some AI ethical principles were highlighted to emphasize the need to protect the fundamental rights of humans (e.g., data privacy, transparency, fairness) to build a better and more sustainable society. Business models may adopt the practice of collecting as much data as possible, but this may no longer be appropriate. Many products undergo certification of their technical and security features. In the future, there may also be ethical considerations; one possible initiative is the certification of ethical features. Such measured features could include the biases of identity recognition capabilities, the level of public disclosure of the types of data collected and their usage purposes, and the negative environmental/societal impact of building the technology. This would also require alignment on evaluation criteria, such as the features to be certified, the metrics to be assessed, and the levels of certification outcome. With the ongoing development of AI regulatory guidelines and standards (e.g., the EU AI Act [90]), third-party inspection/certification bodies may play a more crucial role in helping assess the trustworthiness of AI systems. Increasingly, regulatory bodies may mandate certain levels of compliance in order to achieve a degree of confidence in technologies. Moving forward, technology may be assessed not solely on its technical prowess, but also on its impact on society and the environment.
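As a hypothetical illustration of such an evaluation scheme, the sketch below scores a product on a few ethical criteria and lets the weakest criterion cap the certification outcome, so that one poor dimension cannot be averaged away. The criteria, scores, and cut-offs are assumptions for illustration only.

```python
# Hypothetical ethical-feature certification rubric; criteria and levels
# are illustrative assumptions, not an existing standard.

CRITERIA = ["recognition_bias", "data_disclosure", "environmental_impact"]

def certification_level(scores: dict) -> str:
    """Each criterion is scored 0-100 by a third-party assessor; the weakest
    criterion caps the outcome so one poor dimension cannot be averaged away."""
    floor = min(scores[c] for c in CRITERIA)
    if floor >= 80:
        return "Level 3 (exemplary)"
    if floor >= 60:
        return "Level 2 (compliant)"
    if floor >= 40:
        return "Level 1 (baseline)"
    return "Not certified"

print(certification_level({"recognition_bias": 85,
                           "data_disclosure": 70,
                           "environmental_impact": 90}))  # Level 2 (compliant)
```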

B. Initiatives Supporting Supplemental Trust

The elements in supplemental trust complement those in the foundational class, aiming to improve the overall trust level. There are a couple of initiatives that technologists and authorities can adopt to address the challenges.

(1) Guidelines for the Retention of Human Interaction Designs in Fully Automated Systems.

The benefits of fully automated systems are undeniable. From reduced processing times to increased machine accuracy, humans have relied heavily on machines for their convenience and efficiency. However, it is important that fully automated systems are regulated and designed to be equipped with human interaction features. Overriding safety mechanisms remain crucial whenever the user determines that the automated systems are not performing the actions expected of them. Barring any complacency in using them, enabling human interaction allows users to intervene in the process to prevent undesirable outcomes. Another purpose of this initiative is to give users a heightened sense of trust by letting them perceive themselves to have ownership and ultimate control of the system. This perception can ease users’ minds that they are not helpless in times of peril. As a deeper consideration, it is also important to ask whether some features should not be automated at all, especially where safety is concerned. For example, fully automated technologies used in surgical operation systems should be carefully considered: while they may provide quick and precise analytical computations and executions, the danger of exploitation can lead to irrevocable outcomes [55]. Although human interaction may be prone to human-enabled errors, the balance between automated features that assist humans in decision-making and systems that hold the decision-making power themselves needs to be carefully considered and determined. This topic should be discussed during the design phase of the guidelines for automated systems, as sketched below.
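A hypothetical sketch of the kind of human-override guard such guidelines might require is shown below: high-risk automated actions execute only after explicit human confirmation. The risk labels and API are assumptions.

```python
# Hypothetical human-override guard for an automated system: actions flagged
# high-risk require explicit operator approval before execution.

def execute(action: str, risk: str, confirm) -> str:
    """confirm is a callable that asks the human operator to approve."""
    if risk == "high" and not confirm(action):
        return f"{action}: halted by human override"
    return f"{action}: executed automatically"

def approve(action: str) -> bool:
    # Example operator policy: veto one specific high-risk action.
    return action != "administer-dose"

print(execute("adjust-lighting", "low", approve))   # executed automatically
print(execute("administer-dose", "high", approve))  # halted by human override
```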

(2) Development and Adoption of a Global Reputation Assessment Framework, Standards, or Policies.

Transparency in the information related to consumer products and services is becoming more crucial to elevating trust levels, and it affects all business entities (e.g., suppliers, manufacturers, inventors). The authors of [47] identified this demand for establishing security and trust in the relationships connecting all the entities. They proposed an approach to building a trust and reputation model through different modules (e.g., data collection, trust computation, continuous update) as an end-to-end evaluation. Beyond the development of such models, authorities can develop and adopt a global framework, standards, or policies to help consumers appraise the products or services of interest. This may take the form of a checklist of reputation criteria or of different levels of certification that companies can qualify for to prove their reputations. The Cybersecurity Labelling Scheme launched by the Cyber Security Agency of Singapore is one such certification, helping consumers discern the security level of network-connected devices [53]. Through the different levels of certification, consumers can make informed purchase choices when considering the level of security required for their IoT devices. A potential side effect of the initiative is that it may create mistrust towards organizations that are perceived to have good reputations but do not achieve a good assessed level. There is also the question of whether the assessment framework is sufficiently robust to avoid the opposite scenario, in which organizations with genuinely poor reputations are assessed to have good ones. Taking these effects into account will go a long way toward establishing and cementing the reputation of companies' branding and respectability in their industries.

C. Initiatives Supporting Innovative Trust

Technology innovations are being produced at a much faster rate than regulations, and it has been a constant challenge for authorities to adjust or formulate guidelines to tackle them. One of the key challenges in addressing new and novel technologies is how standards and policies can keep pace with technical advancement. Many works have tried to offer insights into managing regulatory challenges with respect to emerging and disruptive technologies. The authors of [48] provide a good summary of works that sought to improve the efficiency and harmonization between regulations and technologies. Some works focus on the risks arising from specific technological areas (e.g., cryptocurrency, IoT, autonomous systems), whereas others focus on adjustments within broad regulations and policy processes. However, one common theme is that most of the efforts depend heavily on the authorities' capacity to brainstorm solutions and execute them. This may be insufficient at the current pace of technological evolution.

(1) Encourage Each Organization to Provide the Public Community a Legitimate Avenue and Framework for Vulnerability Reporting.

In the formulation of policies, standards, and regulations, there is heavy dependence on collaborations between government agencies and selected industry experts from private organizations. While such forums at the national level are necessary to spearhead regulatory frameworks, more resources and knowledge can be garnered from the public community through a wider network of information gathering. One initiative is to encourage all organizations to provide a legitimate avenue and framework for the community to report discovered vulnerabilities. An Open Worldwide Application Security Project article noted that many organizations do not have clear, published disclosure policies [91], which may hinder the pace of communicating known vulnerabilities. This calls for a clear method and system for reporting vulnerabilities securely, without fear of the vulnerabilities being leaked. Another consideration is to establish the boundaries and conditions of reporting (e.g., no legal action against the person who reported, pre-communication of any rewards) so that the reporting process is clear and without dispute. Organizations with sufficient resources may conduct regular bug bounty programs to garner such vulnerability reports, but not all organizations have the means to do so. This initiative allows members of the public who lack bug bounty expertise, or who have no expectation of payment, to contribute if they discover vulnerabilities accidentally. More importantly, a legitimate channel for reporting creates a safe space for the community to report vulnerabilities without being mistaken for hackers intentionally exploiting the flaws [56]. This promotes greater trust in the ecosystem, and industry experts sitting in their private and national capacities can leverage this feedback system to improve the process of formulating regulations.
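One lightweight, existing convention that organizations could adopt as part of such an avenue is the security.txt file standardized in RFC 9116, served at /.well-known/security.txt, which points researchers to a contact and a disclosure policy. The example below uses placeholder values; it complements, rather than replaces, the broader framework (e.g., legal safe harbor, reward terms) discussed above.

```
# Example /.well-known/security.txt per RFC 9116; all values are placeholders.
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00.000Z
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en
Acknowledgments: https://example.com/security/hall-of-fame
```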

5 Conclusion

In this article, we have proposed a method to consider the different trust classifications and elements via a trust pyramid. The pyramid is grouped into three classifications as a stepwise approach to examining the importance of the trust elements within each classification. The various challenges associated with each aspect were discussed to show the impacts that have contributed to the erosion of trust. Finally, several initiatives were suggested in a bid to improve trust towards technologies and humans. Some of these initiatives may be more suitable for governmental organizations to drive (e.g., certified ethical features, guidelines for system design, development of an assessment framework), as the mandate of governmental organizations would help harmonize efforts across the nation. Other initiatives (e.g., provision of safety alerts, provision of a vulnerability reporting avenue) can be driven and executed primarily by businesses. To the best of our knowledge, the proposed initiatives are not yet widely implemented. While the initiatives may not address all trust aspects, we believe they would be beneficial in building and regaining confidence that might have been lost due to past incidents. Their implementation will also depend on technological advancement, the creativity of the solutions, and the appropriateness of the environment for stakeholders to consider.
More importantly, the world is functioning at a much faster pace. Efforts from authorities and private organizations alone may not be sufficient to cope with the impacts on trust. We believe it is also important to engage the help of members of the public community whenever possible. Increasingly, bug bounty programs are on the rise and are being utilized in parallel with internal security assessments to cope with the evolving threat landscape [14, 57]. This provides opportunities for participation and contribution from the community, which can also elevate the level of perceived trust between organizations and members of the public. Moving forward, it will be important to engage more stakeholders and partner more closely to improve security and trust. By presenting the proposed trust pyramid to the digital society, we have offered a viewpoint on the root elements that influence the decision to trust. The aim is to assist stakeholders in focusing on the fundamental elements of trust, identifying the challenges relating to each of these elements, and addressing the challenges through a combined effort between communities, businesses, and governmental organizations.
Our article has some limitations. The proposed trust elements are not exhaustive and may not be sufficient to cover the trust landscape as the world progresses. Additionally, the proposed initiatives may not have been widely implemented, and it would require further adoption and assessment of the initiatives in order to measure the real impact on trust levels. Therefore, there are some future enhancements that can be explored for this work. First, we posit that the mentioned trust aspects within each classification form the basic set of elements for consideration. There may be more aspects to consider in view of changing regulations and shifting trust perceptions. Hence, one future research direction is to validate the pyramid model to see whether the root elements are still relevant or more elements should be included. It would also be beneficial to see the preliminary efficacy of these initiatives through small-scale implementations. From there, refinements could be made to shape the overall confidence in a more optimized manner. This would help in assessing the robustness of the proposed initiatives in addressing the challenges of the elements within the pyramid model.

References

[1]
Peter Rydzynski and Brent Eskridge. 2021. Log4j: New software supply chain vulnerability unfolding as this holiday's cyber nightmare. Retrieved October 25, 2022 from https://www.ironnet.com/blog/log4j-new-software-supply-chain-vulnerability-unfolding-as-this-holidays-cyber-nightmare
[2]
Adam Bannister. 2021. Popular NPM package UA-Parser-JS poisoned with cryptomining, password-stealing malware. Retrieved October 25, 2022 from https://portswigger.net/daily-swig/popular-npm-package-ua-parser-js-poisoned-with-cryptomining-password-stealing-malware
[3]
Steve Povolny. 2020. Model Hacking ADAS to Pave Safer Roads for Autonomous Vehicles. Retrieved October 25, 2022 from https://www.mcafee.com/blogs/other-blogs/mcafee-labs/model-hacking-adas-to-pave-safer-roads-for-autonomous-vehicles/
[4]
Jennifer Kite-Powell. 2021. The Rise of Voice Cloning and DeepFakes in the Disinformation Wars. Retrieved October 25, 2022 from https://www.forbes.com/sites/jenniferhicks/2021/09/21/the-rise-of-voice-cloning-and-deep-fakes-in-the-disinformation-wars/?sh=e1f474938e14
[5]
Zurich. 2022. Three reasons to take note of the sinister rise of deepfakes. Retrieved October 25, 2022 from https://www.zurich.com/en/media/magazine/2022/three-reasons-to-take-note-of-the-sinister-rise-of-deepfakes
[6]
Edelman. 2021. Edelman Trust Barometer 2021. Retrieved October 25, 2022 from https://www.edelman.com/trust/2021-trust-barometer
[7]
Kimberly Bella. 2021. How to overcome mistrust of data. Retrieved October 25, 2022 from https://www.weforum.org/agenda/2021/12/how-to-overcome-mistrust-of-data/
[8]
Jennifer Kite-Powell. 2020. Here's How 2020 Created a Tipping Point in Trust and Digital Privacy. Retrieved October 25, 2022 from https://www.forbes.com/sites/jenniferhicks/2020/10/27/heres-how-2020-created-a-tipping-point-in-trust-and-digital-privacy/?sh=147b4e5b4fc5
[9]
Rachel Botsman. 2017. Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart. PublicAffairs.
[10]
Giancarlo Fortino, Lidia Fotia, Fabrizio Messina, Domenico Rosaci, and Giuseppe M. L. Sarne. 2020. Trust and reputation in the Internet of Things: State-of-the-art and research challenges. IEEE Access 8 (2020), 60117–60125.
[11]
Hannah J. T. Lim, Kang Xin, Tieyan Li, Haiguang Wang, and Cheng-Kang Chu. 2021. On the trust and trust modeling for the future fully-connected digital world: A comprehensive study. IEEE Access 9 (2021), 106743–106783.
[12]
Dave E. Marcial and Markus A. Launer. 2019. Towards the measurement of digital trust in the workplace: A proposed framework. International Journal of Scientific Engineering and Science 3, 12 (2019), 1–7.
[13]
Bobby Allyn. 2022. Deepfake Video of Zelenskyy Could Be ‘Tip of the Iceberg’ in Info War, Experts Warn. Retrieved October 26, 2022 from https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
[14]
HackerOne. 2021. The 2021 Hacker Report: Understanding Hacker Motivations, Development and Outlook. Retrieved October 26, 2022 from https://www.hackerone.com/resources/reporting/the-2021-hacker-report
[15]
Ari Levy. 2021. The Social Network for Doctors is Full of Vaccine Disinformation. Retrieved October 26, 2022 from https://www.cnbc.com/2021/08/06/doximity-social-network-for-doctors-full-of-antivax-disinformation.html
[16]
Olivia Carville. 2021. Airbnb Is Spending Millions of Dollars to Make Nightmares Go Away. Retrieved October 26, 2022 from https://www.bloomberg.com/news/features/2021-06-15/airbnb-spends-millions-making-nightmares-at-live-anywhere-rentals-go-away
[18]
World Economic Forum. 2020. Ethics by Design: An Organizational Approach to Responsible Use of Technology. Retrieved October 26, 2022 from https://www3.weforum.org/docs/WEF_Ethics_by_Design_2020.pdf
[19]
Tal Frankfurt. 2021. Why All Companies Must Explore the Role of Ethics in Technology. Retrieved October 26, 2022 from https://www.forbes.com/sites/forbestechcouncil/2021/12/13/why-all-companies-must-explore-the-role-of-ethics-in-technology/?sh=4eb1cfb2cc91
[20]
Christina Pazzanese. 2020. Trailblazing Initiative Marries Ethics, Tech. Retrieved October 26, 2022 from https://news.harvard.edu/gazette/story/2020/10/experts-consider-the-ethical-implications-of-new-technology/
[21]
James Vincent. 2016. Twitter Taught Microsoft's AI Chatbot to Be a Racist Asshole in Less than a Day. Retrieved October 26, 2022 from https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
[22]
Peter Königs. 2022. Government surveillance, privacy, and legitimacy. Philosophy & Technology 35 (2022), Article 8. https://doi.org/10.1007/s13347-022-00503-9
[23]
Scott Oliveri. 2020. Skeuomorphism: Design We Learned to Outgrow. Retrieved October 26, 2022 from https://medium.com/design-warp/skeuomorphism-design-we-learned-to-outgrow-8a24895a80d0
[24]
Sanjana Varghese. 2021. How to Build Trust in New Tech. Retrieved October 26, 2022 from https://www.raconteur.net/technology/how-to-build-trust-in-new-tech/
[25]
Sandy Clark, Stefan Frei, Matt Blaze, and Jonathan Smith. 2010. Familiarity breeds contempt: The honeymoon effect and the role of legacy code in zero-day vulnerabilities. In Proceedings of the 26th Annual Computer Security Applications Conference (ACSAC '10), 251–260.
[26]
S. O'Dea. 2022. Average number of devices and connections per person worldwide in 2018 and 2023. Retrieved November 9, 2022 from https://www.statista.com/statistics/1190270/number-of-devices-and-connections-per-person-worldwide/#professional/
[27]
Chris Baraniuk. 2021. Why we place too much trust in machines. Retrieved November 9, 2022 from https://www.bbc.com/future/article/20211019-why-we-place-too-much-trust-in-machines
[28]
Matthew Grissinger. 2019. Understanding human over-reliance on technology. Pharmacy and Therapeutics 44, 6 (2019).
[29]
David Lyell. 2016. Automation Can Leave Us Complacent, and That Can Have Dangerous Consequences. Retrieved November 9, 2022 from https://theconversation.com/automation-can-leave-us-complacent-and-that-can-have-dangerous-consequences-62429
[30]
Danny Yadron and Dan Tynan. 2016. Tesla Driver Dies in First Fatal Crash While Using Autopilot Mode. Retrieved November 9, 2022 from https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk
[31]
Shobhit Seth. 2021. Why Is Silicon Valley a Startup Heaven? Retrieved November 22, 2022 from https://www.investopedia.com/articles/personal-finance/061115/why-silicon-valley-startup-heaven.asp
[32]
Patrick J. Hoflinger, Christian Nagel, and Philipp Sandner. 2017. Reputation for technological innovation: Does it actually cohere with innovative activity? Journal of Innovation & Knowledge (2017).
[33]
Martina Menfors and Felicia Fernstedt. 2015. Consumer Trust in Online Reviews – A Communication Model Perspective. Retrieved November 22, 2022 from https://www.diva-portal.org/smash/get/diva2:865316/FULLTEXT01.pdf
[34]
Jonathan Marciano. 2021. Fake Online Reviews Cost $152 Billion a Year. Here's How E-commerce Sites Can Stop Them. Retrieved November 22, 2022 from https://www.weforum.org/agenda/2021/08/fake-online-reviews-are-a-152-billion-problem-heres-how-to-silence-them/
[36]
Gartner. 2021. Top R&D Priorities for 2022. Gartner R&D Leadership Council. Retrieved November 22, 2022 from https://www.gartner.com/
[37]
PwC. 2018. The 2018 Global Innovation 1000 study. Retrieved November 22, 2022 from https://www.strategyand.pwc.com/gx/en/insights/innovation1000.html#VisualTabs1
[38]
Hun Park, Jun-Hwan Park, Sujin Lee, and Hyuk Hahn. 2021. A study on the impact of R&D intensity on business performance: Evidence from South Korea. Journal of Open Innovation: Technology, Market, and Complexity (2021).
[39]
Lei Lv, Yuchen Yin, and Yuanchang Wang. 2020. The impact of R&D input on technological innovation: Evidence from South Asian and Southeast Asian countries. Discrete Dynamics in Nature and Society (2020).
[40]
Cheryl Teh. 2022. Blackberry phones will stop working on January 4, signaling the end of an era for the iconic cellphone. Retrieved January 17, 2023 from https://www.businessinsider.com/rip-blackberry-phones-will-stop-working-on-january-4-2022-1
[41]
Parth Verma. 2020. Why Did Nokia Fail? Retrieved January 17, 2023 from https://www.feedough.com/why-did-nokia-fail/
[42]
KPMG. 2018. The Changing Landscape of Disruptive Technologies. Retrieved January 17, 2023 from https://assets.kpmg/content/dam/kpmg/pl/pdf/2018/06/pl-The-Changing-Landscape-of-Disruptive-Technologies-2018.pdf
[43]
International Organization for Standardization. 2022. Standards By ISO/IEC JTC 1/SC 42 – Artificial Intelligence. Retrieved January 17, 2023 from https://www.iso.org/committee/6794475/x/catalogue/p/1/u/1/w/0/d/0
[44]
Benish Chaudhry, Ansar-Ul-Haque Yasar, Samar El-Amine, and Elhadi Shakshuki. 2018. Passenger safety in ride-sharing services. Procedia Computer Science (2018).
[45]
Beena Ammanath. 2021. Thinking Through the Ethics of New Tech…Before There's a Problem. Harvard Business Review. Retrieved March 11, 2023 from https://hbr.org/2021/11/thinking-through-the-ethics-of-new-techbefore-theres-a-problem
[46]
European Commission. 2021. Ethics By Design and Ethics of Use Approaches for Artificial Intelligence. Retrieved March 11, 2023 from https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-by-design-and-ethics-of-use-approaches-for-artificial-intelligence_he_en.pdf
[47]
José María Jorquera Valero, Pedro Miguel Sánchez Sánchez, Manuel Gil Pérez, Alberto Huertas Celdrán, and Gregorio Martínez Pérez. 2022. Toward pre-standardization of reputation-based trust models beyond 5G. Computer Standards & Interfaces 81 (2022).
[48]
Araz Taeihagh, M. Ramesh, and Michael Howlett. 2021. Assessing the regulatory challenges of emerging disruptive technologies. Regulation & Governance (2021).
[49]
Joe Ciancimino. 2021. Will Disruptions Make Supply Chains More Vulnerable to Attack? Retrieved March 11, 2023 from https://www.ispartnersllc.com/blog/supply-chain-disruptions-attacks/
[50]
Rachelle Bosua, Sean B. Maynard, and Atif Ahmad. 2015. The Internet of Things (IoT) and its impact on individual privacy: An Australian perspective. Computer Law & Security Review (2015).
[51]
National Security Commission on Artificial Intelligence. 2019. Interim Report. Retrieved February 3, 2024 from https://epic.org/wp-content/uploads/foia/epic-v-ai-commission/AI-Commission-Interim-Report-Nov-2019.pdf
[52]
[53]
Cyber Security Agency of Singapore. 2021. Cybersecurity Labelling Scheme. Retrieved March 11, 2023 from https://www.csa.gov.sg/our-programmes/certification-and-labelling-schemes/cybersecurity-labelling-scheme
[54]
Tristan Greene. 2021. Researchers Fooled AI into Ignoring Stop Signs Using a Cheap Projector. Retrieved June 6, 2023 from https://thenextweb.com/news/researchers-tricked-ai-ignoring-stop-signs-using-cheap-projector
[55]
Michael T. Klare. 2020. The Pentagon's Next Project: Automated War. Retrieved June 6, 2023 from https://www.thenation.com/article/world/trump-pentagon-jadc2/
[56]
Philip Bump. 2021. A Newspaper Informed Missouri about a Website Flaw. The Governor Accused It of ‘Hacking’. Retrieved June 6, 2023 from https://www.washingtonpost.com/politics/2021/10/14/newspaper-informed-missouri-about-website-flaw-governor-accused-it-hacking/
[57]
Greg Noone. 2021. The Rise and Rise of Bug Bounty Hunting. Retrieved June 6, 2023 from https://techmonitor.ai/technology/cybersecurity/rise-and-rise-of-bug-bounty-hunting
[58]
Kendra Cherry. 2022. Maslow's Hierarchy of Needs. Retrieved June 8, 2023 from https://www.verywellmind.com/what-is-maslows-hierarchy-of-needs-4136760
[59]
Louis Rosenberg. 2023. Why Generative AI Is More Dangerous Than You Think. Retrieved June 8, 2023 from https://venturebeat.com/ai/why-generative-ai-is-more-dangerous-than-you-think/
[60]
Upul Jayasinghe, Gyu Myoung Lee, Tai-Won Um, and Qi Shi. 2018. Machine learning based trust computational model for IoT services. IEEE Transactions on Sustainable Computing (2018).
[61]
Grab. 2023. Grab Malaysia Introduces New Safety Innovation – Setting Standards for Preventable Incidents. Retrieved June 8, 2023 from https://www.grab.com/my/press/others/grab-new-safety-innovation/
[62]
Julian B. Rotter. 1967. A new scale for the measurement of interpersonal trust. Journal of Personality 35 (1967), 651–665.
[63]
Roy J. Lewicki, Daniel J. McAllister, and Robert J. Bies. 1998. Trust and distrust: New relationships and realities. Academy of Management Review 23 (1998), 438–458.
[64]
Roger C. Mayer, James H. Davis, and F. David Schoorman. 1995. An integrative model of organizational trust. Academy of Management Review 20 (1995), 709–734.
[65]
Piotr Sztompka. 1999. Trust: A Sociological Theory. Cambridge University Press. Retrieved November 9, 2023 from http://ndl.ethernet.edu.et/bitstream/123456789/17643/1/28.pdf
[66]
Denise M. Rousseau, Sim B. Sitkin, Ronald S. Burt, and Colin Camerer. 1998. Not so different after all: A cross-discipline view of trust. Academy of Management Review 23 (1998), 393–404.
[67]
Fuan Li and Stephen C. Betts. 2003. Trust: What it is and what it is not. International Business & Economics Research Journal 2 (2003).
[68]
F. David Schoorman, Roger C. Mayer, and James H. Davis. 2007. An integrative model of organizational trust: Past, present, and future. Academy of Management Review 32 (2007), 344–354.
[69]
World Economic Forum. 2023. World Economic Forum: Digital Trust. Retrieved November 9, 2023 from https://initiatives.weforum.org/digital-trust/about
[70]
ISACA. 2023. The Digital Trust Paradox: Despite Hailing Its Importance, For Most It's Not a Priority. Retrieved November 9, 2023 from https://www.isaca.org/about-us/newsroom/press-releases/2023/the-digital-trust-paradox
[71]
Dmitry E. Kozhevnikov and Anton S. Korolev. 2018. Digital trust as a basis for the digital transformation of the enterprise and economy. In 2018 11th International Conference “Management of Large-Scale System Development”.
[72]
Raja Naeem Akram and Ryan K. L. Ko. 2014. Digital trust – trusted computing and beyond: A position paper. In 2014 IEEE 13th International Conference on Trust, Security and Privacy in Computing and Communications.
[73]
Matin Amoozadeh, David Daniels, Daye Nam, Stella Chen, Michael Hilton, Sruti Srinivasa Ragavan, and Mohammad Amin Alipour. 2023. Trust in Generative AI among students: An exploratory study. Retrieved November 9, 2023 from https://arxiv.org/abs/2310.04631
[74]
Abiodun A. Solanke. 2022. Explainable digital forensics AI: Towards mitigating distrust in AI-based digital forensics analysis using interpretable models. Forensic Science International: Digital Investigation 42 (2022).
[75]
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys 55 (2023), 1–38.
[76]
Somya Abdulkarim Alhandi, Hazalila Kamaludin, and Nayef Abdulwahab Mohammed Alduais. 2023. Trust evaluation model in IoT environment: A comprehensive survey. IEEE Access 11 (2023), 11165–11182.
[77]
Warsun Najib, Selo Sulistyo, and Widyawan. 2019. Survey on trust calculation methods in Internet of Things. Procedia Computer Science 161 (2019), 1300–1307.
[78]
Asmita Manna, Anirban Sengupta, and Chandan Mazumdar. 2016. A survey of trust models for enterprise information systems. Procedia Computer Science 85 (2016), 527–534.
[79]
Wanita Sherchan, Surya Nepal, and Cecile Paris. 2013. A survey of trust in social networks. ACM Computing Surveys 45 (2013), 1–33.
[80]
Han Ei Chen, Jeanne Tan, and Carol Soon. 2023. Digital Trust and Why It Matters. Retrieved November 10, 2023 from https://ctic.nus.edu.sg/resources/CTIC-WP-05(2023).pdf
[81]
Paula Goldman. 2023. How Intentional Innovation Can Build Trust in Tech. Retrieved November 16, 2023 from https://techcrunch.com/sponsor/salesforce/how-intentional-innovation-can-build-trust-in-tech/
[82]
Nancy Albinson, Sam Balaji, and Yang Chu. 2019. Building Digital Trust: Technology Can Lead the Way. Deloitte Insights. Retrieved November 16, 2023 from https://www2.deloitte.com/content/dam/insights/us/articles/6320_Building-digital-trust/DI_Building-digital-trust.pdf
[83]
Ryan Shandler and Miguel Alberto Gomez. 2023. The hidden threat of cyber-attacks – undermining public confidence in government. Journal of Information Technology & Politics 20 (2023), 359–374.
[84]
Tilly Kenyon. 2021. What Causes the Most Damage, Losing Data or Trust? Retrieved November 16, 2023 from https://cybermagazine.com/cyber-security/what-causes-most-damage-losing-data-or-trust
[85]
Miguel Alberto Gomez and Ryan Shandler. 2021. Cyber Conflict and the Erosion of Trust. Retrieved November 16, 2023 from https://www.cfr.org/blog/cyber-conflict-and-erosion-trust
[86]
Neal A. Pollard, Adam Segal, and Matthew G. Devost. 2018. Trust War: Dangerous Trends in Cyber Conflict. Retrieved November 16, 2023 from https://warontherocks.com/2018/01/trust-war-dangerous-trends-cyber-conflict/
[87]
Ioannis Agrafiotis, Jason R. C. Nurse, Michael Goldsmith, Sadie Creese, and David Upton. 2018. A taxonomy of cyber-harms: Defining the impacts of cyber-attacks and understanding how they propagate. Journal of Cybersecurity 4, 1 (2018).
[88]
Oliver J. Mason, Caroline Stevenson, and Fleur Freedman. 2014. Ever-present threats from information technology: The Cyber-Paranoia and Fear Scale. Frontiers in Psychology 5 (2014).
[89]
Maria Bada and Jason R. C. Nurse. 2020. The social and psychological impact of cyber-attacks. In Emerging Cyber Threats and Cognitive Vulnerabilities. Academic Press, 73–92. https://doi.org/10.1016/B978-0-12-816203-3.00004-6
[90]
Chris Stokel-Walker. 2024. Europe's New AI Rules Could Go Global – Here's What That Will Mean. Retrieved February 3, 2024 from https://www.scientificamerican.com/article/europes-new-ai-rules-could-go-global-heres-what-that-will-mean/
[91]
OWASP. Vulnerability Disclosure Cheat Sheet. Retrieved February 3, 2024 from https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability_Disclosure_Cheat_Sheet.html

Published In

Digital Government: Research and Practice  Volume 5, Issue 2
June 2024
91 pages
EISSN: 2639-0175
DOI: 10.1145/3613590

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 19 June 2024
Online AM: 19 March 2024
Accepted: 03 March 2024
Revised: 15 February 2024
Received: 20 June 2023
Published in DGOV Volume 5, Issue 2


Author Tags

  1. Trust
  2. Trust modeling
  3. Trust framework

Qualifiers

  • Survey
