1 Introduction
The Internet-of-Everything (IoE) era has taken center stage in the merging of the virtual and physical worlds, in which people, processes, data, and things are closely connected. These connections are not restricted to computers or mobile devices as we usually know them; they also extend to industrial machinery, vehicles, and other systems with networking capabilities. As the world becomes more dependent on technology, the opportunities for exploitation also increase, which can negatively affect the level of digital trust as a whole.
The adverse impacts of a digitalized world can be seen in several areas. Attacks on supply chain industries have caused widespread disruptions to operations and unavailability of resources [49]. Vulnerabilities affecting software supply chains have compromised software packages and applications, and attacks were able to exfiltrate sensitive information [1,2], among other impacts. In the area of artificial intelligence (AI), an investigation conducted by McAfee found that AI systems in autonomous vehicles could be hacked [3]. The algorithms that define the parameters of the vehicle systems were manipulated, causing the systems to misinterpret information; this could lead to safety hazards and deadly incidents. Regarding digital content and privacy, fake content is becoming prominent, and it is increasingly difficult to distinguish legitimate content from nefarious content. From political agendas to monetary fraud, fake content poses a great risk due to the uncertainties caused by the lack of data authenticity [4,5].
The impacts of these malicious attacks on trust can also be felt, as shown in Figure 1. First, they reduce people's confidence that government entities, service providers, and suppliers can protect and secure sensitive data [6,83,84,87]. Second, they undermine trust in network-enabled devices and systems [85–87], as users may be concerned with the potential loss of information or with safety issues. Last, they create apprehension and fear towards the usage of technologies as people become more cautious and wary of these dangers [88,89]. These impacts affect trust adversely and can hinder the progression of technology. In the Edelman Trust Barometer report [6], a decline in trust levels was reported across many technological subsectors (e.g., Internet-of-Things, 5G, AI) in 25 of the 27 countries that participated in the survey. Beyond the technological front, trust in government bodies has also diminished, a decline further underscored by the effects of undesirable cyber activities.
As trust remains an important foothold in all interactions and transactions, it is important to understand a key feature of it: trust is easy to lose but difficult to build. It can take many years to build a good reputation, and even more effort to regain trust once it is lost. This can have long-lasting effects on the reputation of organizations and the confidence of the public. Against this backdrop of eroding trust, improving trust levels becomes all the more critical in the world today. In our work, we aim to propose practical initiatives that stakeholders can undertake to curb the deteriorating confidence that has influenced the ecosystem. In Section 2, we discuss the existing works that strive to tackle trust issues in different manners. Section 3 comprises our proposed trust pyramid, in which we categorize the different trust elements and highlight some challenges. Our proposed initiatives to increase trust are discussed in Section 4. We conclude the paper in Section 5.
2 Existing Literature and Works
There are numerous and varying definitions of trust in the existing literature, many of which stem from the social-psychological viewpoint of trust in other individuals or groups of people. The author of [62] defined trust as the “expectancy held by an individual or a group that the word, promise, verbal or written statement of another individual or group can be relied upon”. The authors of [63] defined trust as the “confident positive expectations regarding another's conduct”. The definitions in [62,63] focus on the expectation that the trustee will perform the right action for the trustor. The authors of [64] defined trust with the “willingness to be vulnerable” as a key aspect. This adds another lens to the definition of trust: the degree of risk that one is willing to accept, and the vulnerability one is willing to bear, before trusting another individual or group. The author of [65] provided a definition of trust in which “trust is a bet about the future contingent actions of others”. Similar to the definition used in [64], it connotes a behavior and mindset of risk taking. The authors of [66] opined that “Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another”, which encompasses the essence of the trust definitions highlighted in [62–65].
It is not easy to capture all aspects of trust within a short definition. As described in [66,67], it is extremely challenging to reach a common consensus on a trust definition, despite multiple attempts by researchers. As such, the authors of [67] chose to elaborate on the different facets of trust. They agreed that trust is a psychological state correlated with the expectations of another and the willingness to be vulnerable. Next, they mentioned that trust is relational in that the relationship involves interaction with the trustee. Whereas the ‘expectations of others’ and the ‘willingness to be vulnerable’ are inclined towards self-presented concerns, the relational aspect dwells on how the interaction with the other party can affect the trust level of the trustor (e.g., exacerbate existing concerns, improve confidence). Last, the authors stated that trust is a choice, which highlights the final course of action of deciding to trust or not to trust. The authors of [66] also described a facet similar to relational trust but termed it interdependence: a necessary condition whereby the interest of a person or a group can only be achieved through reliance on another. This signifies that a trust relationship must involve at least two parties. The authors of [68] expanded on their own work [64] to include new dimensions of trust that address the evolution of perspectives on trust. These dimensions include context-specific scenarios (e.g., a supervisor may be perceived to have greater authority and thus be willing to take greater risks in choices than a subordinate), cultural implications (e.g., a task-oriented culture and a relationship-oriented culture can have differing initial trust before executing a task), and emotions (e.g., emotions may cause a trustor to take unwarranted risks).
With the proliferation of technologies and digital applications, the domain of trust has expanded beyond the realm of humans. In addition to the traditional definitions of interpersonal trust, researchers and organizations are building upon these definitions to address digital trust as well. The World Economic Forum defines digital trust as “individuals’ expectation that digital technologies and services – and the organizations providing them – will protect all stakeholders’ interests and uphold societal expectations and values” [69]. ISACA, a global professional membership organization, defines digital trust as “the confidence in the integrity of the relationship, interactions and transactions among providers and consumers within an associated digital ecosystem” [70]. The authors of [71] refer to digital trust as the “relationship between a person and an autonomous intellectual agent that exists in a digital environment”. In [72], digital trust is defined as “a trust based either on past experience or evidence that an entity has behaved and/or will behave in accordance with the self-stated behavior”. Through the development of trust relationships over time, the definition of trust has evolved from interpersonal trust between humans to also encompass trust towards technologies, services, organizations, and businesses.
One important essence of trust is that trustees are expected to perform certain behaviors. When these behaviors deviate, mistrust can arise. Mistrust can cause discomfort when users are not assured that the engaged services or acquired products can provide the perceived benefits. In a study conducted by Visa [7], consumers were found to be increasingly distrustful of the way their data is managed. As such, individuals increasingly desire greater ownership of, and empowerment to control, the usage of their data. In another study [8], the US House of Representatives Judiciary Committee found that distrust in big tech companies is increasing. One of the main concerns raised was the advancement of technology at the expense of user privacy. In the area of generative AI technologies, research works have discussed the issue of the transparency of AI models, the lack of explanation of the models’ outputs, and the challenges of hallucination [73–75]. Without understanding how an AI model processes information and arrives at a generated output, users cannot be assured that the model is performing as intended, for the right reason, and is producing the right output. If humans, organizations, and businesses do not use technologies as intended, or if the technologies and applications do not perform as expected, mistrust can result and lead to serious consequences in certain scenarios (e.g., the criminal justice system [73]).
There are many existing works that address the topic of trust and aim to shape the level of trust for the better. They provide extensive trust knowledge and facets that are important for all entities to consider towards elevating the confidence in all trust relationships. For example, Rachel Botsman, a leading expert on trust topics, discussed key insights in her book on how technologies can shape the trust culture in the modern world [9]. She highlighted key principles, practiced by some businesses today, on how to encourage humans to trust new ideas and platforms. She discussed the limits to the development of technologies that may be beneficial towards sustaining trust levels (e.g., how far, and which, technologies should be fully automated). She also discussed the importance of shared responsibilities between stakeholders, and how business practices, government policies, and user behaviors all play important roles in fostering the trust environment. In [59], the authors provided insights related to social trust in the COVID-19 situation. They found that strong government support is an important factor in enhancing trust levels between the members of society and strengthening response measures. Such support can take the form of proactively supplying information and products to combat the virus, delivering encouraging messages to adopt necessary measures, or collaborating with members to minimize the risk of viral infection. In [10], the authors proposed a comparison of different trust modeling architectures in the Internet-of-Things (IoT) environment. The trust architectures serve as platforms to contain and compute the trust values of IoT environments, measuring their behaviors, reputations, and accuracies. The authors further discussed the advantages and limitations of the architectures. For example, a distributed architecture is divided into three layers (the Things, Fog, and Cloud layers), and each layer has its own computational capacity and security considerations (e.g., limited computational resources at the Things layer). Centralized architectures integrate all the layers but may introduce a single point of failure. Through these comparisons, the authors hoped to advance the development of measuring trust in IoT environments and ecosystems.
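To make the layered computation concrete, the following minimal Python sketch (our own illustration under simplifying assumptions, not the actual design from [10]) shows how a trust value might be computed locally at the resource-constrained Things layer and aggregated upward through the Fog and Cloud layers; the function names and the 0.7 weighting are hypothetical.

```python
# Illustrative sketch of a distributed (Things/Fog/Cloud) trust
# architecture: each layer computes what its resources allow, and
# higher layers aggregate the scores reported from below.
from statistics import fmean

def things_layer_score(successes: int, failures: int) -> float:
    """Lightweight score a constrained device can compute locally:
    the fraction of successful past interactions."""
    total = successes + failures
    return successes / total if total else 0.5  # neutral prior

def fog_layer_score(device_scores: list[float]) -> float:
    """A fog node averages the scores of the devices it manages."""
    return fmean(device_scores) if device_scores else 0.5

def cloud_layer_score(fog_scores: list[float], reputation: float,
                      weight: float = 0.7) -> float:
    """The cloud blends fog aggregates with a global reputation value."""
    local = fmean(fog_scores) if fog_scores else 0.5
    return weight * local + (1 - weight) * reputation

# Example: two fog nodes, each overseeing a few devices.
fog_a = fog_layer_score([things_layer_score(9, 1), things_layer_score(7, 3)])
fog_b = fog_layer_score([things_layer_score(5, 5)])
print(cloud_layer_score([fog_a, fog_b], reputation=0.8))  # ~0.70
```

A centralized variant would collapse these functions into a single cloud-side computation, which simplifies coordination but, as noted above, introduces a single point of failure.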
Some works have proposed methods and frameworks to evaluate the degree of trust in varying scenarios and environments. The authors of [11] proposed a set of factors and a broad checklist for evaluating humans’ trust in different types of relationships, such as the trust from people to people and the trust from things to people. The authors used these sets of relationships to formulate the parameters of an evaluation rubric consisting of five aspects (i.e., Security, Comprehensiveness, Usability, Functionality, and Robustness). In Security, the requirements focus on defending against threats and attacks that may compromise trust. In Comprehensiveness, the requirements focus on an adaptive model that can scale with evolving technological trends, remain contextually suitable, and accurately depict the level of trust. Usability considers the computational resources, the usability of data, and the applicability of trust models in different networks. In Functionality, the requirements focus on a decision-making framework for how trust values can affect the level of access. Last, Robustness focuses on requirements regarding the availability of management systems during network disturbances. Figure 2 shows a table of the rubrics that were formulated. In another work [12], the authors proposed a conceptual trust framework to quantify the measurement of digital trust in the workplace. This is done by mapping the confidence level in three aspects (i.e., people, technology, and process) and identifying the drivers impacting these aspects. A descriptive assessment report is produced at the end to demonstrate the correlation between the drivers and the confidence level. The framework aims to evaluate the level of digital trust and serves as a platform to facilitate further studies towards an assessment tool for quantifying digital trust. In [60], the authors proposed a trust computational model to predict the trustworthiness of IoT services. Aided by a novel machine learning algorithm, the model extracts a variety of features as metrics with which to evaluate trustworthiness. Examples of such features include the co-location relationship, the frequency and duration of collaborative interactions, and feedback that captures the historical experience of past interactions.
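As a rough illustration of this feature-based approach (substituting an off-the-shelf classifier for the novel algorithm in [60]; the feature values and labels below are hypothetical), a model can be trained on past interactions and then asked for the probability that a new service is trustworthy:

```python
# Rough sketch of feature-based trust prediction: a generic classifier
# trained on interaction features stands in for the model in [60].
# Feature order: [co_location, collab_frequency, duration, feedback]
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical interactions labeled trustworthy (1) or not (0).
X_train = np.array([
    [1.0, 0.9, 0.8, 0.95],  # co-located, frequent, long, good feedback
    [0.0, 0.1, 0.2, 0.30],
    [1.0, 0.7, 0.6, 0.80],
    [0.0, 0.2, 0.1, 0.10],
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Predicted probability that a new IoT service is trustworthy.
new_service = np.array([[1.0, 0.5, 0.4, 0.70]])
print(model.predict_proba(new_service)[0, 1])
```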
There are also survey works that compare various trust models applicable to IoT environments, enterprise information systems, and social networks. The authors of [76,77] discussed trust models for IoT paradigms and offered insights on the common characteristics of trust and the types of classification used in them. Some of the common characteristics are that trust is context dependent (i.e., only relevant information is processed), asymmetric (i.e., trust does not necessarily apply in both directions between two entities), and imperfect (i.e., no trust is ever absolute). Examples of classification include the methods of measuring trust, the types of trust to be measured, and the source of trust interactions. In [78], the authors identified three categories with which to classify trust (i.e., credential-based, reputation-based, and hybrid). Credentials may refer to testimonials or certifications that demonstrate the qualification of a service, while reputation comprises the cumulative knowledge of past behavior and performance belonging to the service or product providers. The authors of [79] discussed three aspects of social trust: information collection, evaluation, and dissemination. These three categories are highly relevant in social networks in terms of the type of information collected, the techniques used to evaluate trust levels, and the methods of disseminating information. Figure 3 shows the classification architecture used in [79].
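To illustrate the reputation-based category in concrete terms (a generic sketch of one common approach, not a specific scheme surveyed in [78]; the decay factor is our own assumption), a provider's reputation can be computed as a time-decayed average of past interaction ratings, so that recent behavior counts more than older behavior:

```python
# Generic sketch of a reputation-based trust score: a time-decayed
# average of past ratings, so recent behavior outweighs older behavior.
def reputation_score(ratings: list[float], decay: float = 0.9) -> float:
    """ratings are ordered oldest-to-newest, each in [0, 1]."""
    if not ratings:
        return 0.5  # neutral prior for an unknown provider
    # The oldest rating gets the smallest weight (decay^(n-1));
    # the newest rating gets a weight of decay^0 = 1.
    weights = [decay ** age for age in range(len(ratings) - 1, -1, -1)]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

# A provider whose early behavior was poor but has since improved.
print(reputation_score([0.2, 0.3, 0.8, 0.9, 1.0]))  # ~0.69
```

A credential-based model would instead check verifiable artifacts (e.g., certifications) before any interaction takes place, and a hybrid model would combine both signals.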
These works play a crucial role in helping researchers consider and improve the design of trust models that address as many aspects as possible. They provide excellent insights to shape the trust cultures in our world and drive continuous improvement in the design of trust models to meet the demands of the evolving landscape. Nevertheless, we believe there are gaps that our article can fill. First, many of the works focus primarily on interpersonal trust [63,66,67] or digital trust [73–78], whereas some works discuss both aspects [65,79]. Some of these works also focus on specific environments such as AI [73–75], IoT [76,77], and social networks [79]. It is important to embrace both traditional and newer definitions of trust to capture the vastness of trust considerations. Second, most works focus on the properties of trust relationships and the different classifications of measuring trust [11,12,60,66–68,74,76–79,87] but not on the root causality factors that influence the decision to trust. We aim to provide another viewpoint on improving trust by examining the key root elements that influence this decision. We aim to propose a trust model that is agnostic to the bounds of the various trust definitions and to specific types of interactions between humans and/or technologies, and that is applicable to any type of environment. We also aim to propose initiatives that we feel may be beneficial in understanding possible courses of action to address the challenges in trust.
5 Conclusion
In this article, we have proposed a method to consider the different trust classifications and elements via a trust pyramid. The pyramid is grouped into three classifications as a stepwise approach to examining the importance of the trust elements within each classification. The various challenges associated with each aspect were discussed to show the impacts that have contributed to the erosion of trust. Finally, several initiatives were suggested in a bid to improve trust towards technologies and humans. Some of these initiatives may be more suitable for governmental organizations to drive (e.g., certified ethical features, guidelines for system design, development of an assessment framework), as the mandate of governmental organizations would help to harmonize the efforts across a nation. Other initiatives (e.g., provision of safety alerts, provision of a vulnerability reporting avenue) can be driven and executed primarily by businesses to help address the challenges. To the best of our knowledge, the proposed initiatives are not yet widely implemented. While the initiatives may not address all trust aspects, we believe they would be beneficial in building and regaining confidence that might have been lost due to past incidents. The implementation of the initiatives will also depend on technological advancement, the creativity of the solutions, and the appropriateness of the environment, which stakeholders should consider.
More importantly, the world is functioning at a much faster pace, and the efforts of authorities and private organizations alone may not be sufficient to cope with the impacts on trust. We believe it is also important to engage the help of members of the public community whenever possible. Increasingly, bug bounty programs are on the rise and are being utilized in parallel with internal security assessments to cope with the evolving threat landscape [14,57]. This provides opportunities for participation and contribution from the community, which can also elevate the level of perceived trust between organizations and members of the public. Moving forward, it will be important to engage more stakeholders and partner more closely to improve security and trust. By presenting the proposed trust pyramid to the digital society, we have offered a viewpoint that looks at the root elements influencing the decision to trust. The aim is to assist stakeholders in focusing on the fundamental elements of trust, identifying the challenges relating to each of these elements, and addressing the challenges through a combined effort between communities, businesses, and governmental organizations.
Our article has some limitations. The proposed trust elements are not exhaustive and may not be sufficient to cover the trust landscape as the world progresses. Additionally, the proposed initiatives may not have been widely implemented, and further adoption and assessment of the initiatives would be required to measure their real impact on trust levels. Therefore, there are future enhancements that can be explored for this work. First, we posit that the mentioned trust aspects within each classification form the basic set of elements for consideration; there may be more aspects to consider in view of changing regulations and shifting trust perceptions. Hence, one future research direction is to validate the pyramid model to see whether the root elements remain relevant or more elements should be included. It would also be beneficial to assess the preliminary efficacy of the proposed initiatives through small-scale implementations. From there, refinements could be made to shape overall confidence more effectively. This would help in assessing the robustness of the proposed initiatives in addressing the challenges of the elements within the pyramid model.