
Hype Cycle for Emerging Technologies, 2020

Published: 24 July 2020 ID: G00450415

Analyst(s): Brian Burke, Marty Resnick, Arnold Gao

Our 2020 Hype Cycle highlights emerging technologies that will significantly
affect business, society and people over the next five to 10 years. It includes
technologies that enable a composable enterprise, aspire to regain society’s
trust in technology and alter the state of your brain.

Table of Contents

Analysis
    What You Need to Know
    The Hype Cycle
        Trends in Emerging Technologies
    The Priority Matrix
    Off the Hype Cycle
    On the Rise
        Authenticated Provenance
        AI-Augmented Design
        DNA Computing and Storage
        Low-Cost Single-Board Computers at the Edge
        Self-Supervised Learning
        Health Passport
        Bidirectional Brain-Machine Interface
        Generative Adversarial Networks
        Biodegradable Sensors
        Differential Privacy
        Private 5G
        Small Data
        Adaptive ML
        Composite AI
        Generative AI
        Packaged Business Capabilities
        Citizen Twin
        Digital Twin of the Person
        Multiexperience
        Responsible AI
        AI-Augmented Development
        Composable Enterprise
    At the Peak
        Data Fabric
        Embedded AI
        Secure Access Service Edge (SASE)
        Social Distancing Technologies
        Explainable AI
    Sliding Into the Trough
        Carbon-Based Transistors
        Bring Your Own Identity
        Ontologies and Graphs
    Appendixes
        Hype Cycle Phases, Benefit Ratings and Maturity Levels
Gartner Recommended Reading

List of Tables

Table 1. Hype Cycle Phases
Table 2. Benefit Ratings
Table 3. Maturity Levels

List of Figures

Figure 1. Hype Cycle for Emerging Technologies, 2020
Figure 2. Priority Matrix for Emerging Technologies, 2020
Figure 3. Hype Cycle for Emerging Technologies, 2019



Analysis
What You Need to Know
As a technology innovation leader, CTO or CIO, you must stay up to date with emerging
technologies to determine their impact on your industry and the opportunities they present for your
organization. This year brings exciting opportunities for you to explore in your search for
technology-enabled business transformation. If you’re an early adopter, you can use this Hype
Cycle as a starting point to:

■ Understand the technologies you need to watch during the five- to 10-year planning horizon.
■ Explore potential opportunities.
■ Plan to exploit these technologies as they become commercially viable.

Technology innovation has become the key to competitive differentiation and is the catalyst for
transforming many industries. Breakthrough technologies are continually appearing, challenging
even the most innovative to keep up. Your focus on digital business transformation means you must
cut through the hype surrounding these technologies. The innovation profiles (IPs) highlighted in this
research provide guidance on the business impact of emerging technologies and recommendations
for how to use them to drive competitive differentiation.

This year, the emerging technologies on our Hype Cycle fall into five clear trends:

■ Composite architectures
■ Algorithmic trust
■ Beyond silicon
■ Formative artificial intelligence (AI)
■ Digital me

The Hype Cycle


The Hype Cycle for Emerging Technologies is unique among Gartner Hype Cycles because it distils
insights from more than 1,700 technologies that Gartner profiles into a succinct set of “must know”
emerging technologies and trends. The technologies on this Hype Cycle are selected for their transformational or high benefit ratings and for the breadth of their impact across business and society. Because of its focus on emerging technologies, this Hype Cycle features only technologies and trends in the first half of the cycle, and it tends to introduce technologies that haven't featured in previous iterations. Limited
space means that we have had to retire most of the technologies that we highlighted in the 2019
version of this research. The retired technologies remain important and are included in other Hype
Cycle research (see the Off the Hype Cycle section).



Trends in Emerging Technologies
This iteration of the Hype Cycle highlights five distinct trends that create highly adaptive solutions,
explore the future of AI and rebuild trust in technology and society. You should track these five
emerging technology trends.

Composite architectures. Rapid business change and decentralization are driving the need for
organizational agility and custom user experiences. The composable enterprise is designed to
respond to rapidly changing business needs with packaged business capabilities built on a flexible
data fabric. A composite architecture is implemented with solutions composed of packaged
business capabilities. Built-in intelligence is decentralized and extends outward to edge devices and
the end user.

To make your organization more agile, examine the following technologies:

■ Composable enterprise
■ Packaged business capabilities
■ Data fabric
■ Private 5G
■ Embedded AI
■ Low-cost single-board computers at the edge

Algorithmic trust. In recent years, organizations have exposed personal data, used biased AI
models and flooded the internet with fake news and videos, to name just a few issues. In response,
a new trust architecture is evolving, shifting from trusting organizations to trusting algorithms.
Algorithmic trust models are replacing trust models based on responsible authorities in order to ensure the privacy and security of data, the provenance of assets, and the identity of individuals and things. Algorithmic trust helps to ensure that organizations will not be exposed to the risks and
costs of losing the trust of their customers, employees and partners.

To start rebuilding trust with your customers, employees and partners, examine the following
technologies:

■ Secure access service edge (SASE)


■ Differential privacy
■ Authenticated provenance
■ Bring your own identity (BYOI)
■ Responsible AI
■ Explainable AI

BYOI holds an unusual position in the Hype Cycle: its maturity is early mainstream, but it hasn't yet reached the bottom of the Hype Cycle. The reason is that several implementation models exist for BYOI. These range from long-established social identities (for example, Facebook and LinkedIn) to less mature implementations (such as bank identities) to emerging decentralized (blockchain) implementation models. The positioning and maturity reflect a trade-off across these various implementation models.

Beyond silicon. Gordon Moore famously predicted that the number of transistors in a dense
integrated circuit would double approximately every two years. For more than 40 years, Moore’s
Law has guided the IT industry. As technology approaches the physical limits of silicon, new
advanced materials are creating breakthrough opportunities to make technologies faster and
smaller.

Explore the following critical technologies:

■ DNA computing and storage


■ Biodegradable sensors
■ Carbon-based transistors

Formative AI. This refers to a set of emerging AI and related technologies that can dynamically
change to respond to situational variances, hence the term “formative.” Some of these technologies
enable application developers and user experience designers to create solutions using AI-enabled
tools. Other technologies enable the development of AI models that can evolve dynamically to
adapt over time. The most advanced of these technologies can generate novel models to solve
specific problems.

To explore the boundaries of AI, analyze the following technologies:

■ AI-augmented design
■ AI-augmented development
■ Ontologies and graphs
■ Small data
■ Composite AI
■ Adaptive machine learning (ML)
■ Self-supervised learning
■ Generative AI
■ Generative adversarial networks

Digital me. Technology is becoming more integrated with people to create opportunities for digital
representations of ourselves. The COVID-19 pandemic has spawned health passports and social
distancing technologies designed to keep people safe. Digital twins of humans provide models of
individuals that can represent people in both the physical and digital space. The way we interact
with the digital world is also changing, moving beyond the use of screens and keyboards to a combination of interaction modalities (e.g., voice, vision and gesture), and even directly altering our brains.

Track the following technologies:

■ Social distancing technologies (also known as contact-tracing apps)


■ Health passport
■ Digital twin of the person
■ Citizen twin
■ Multiexperience
■ Bidirectional brain-machine interface (BMI)

Two digital-me technologies are moving particularly quickly through the Hype Cycle: health
passports and social distancing technologies. Both of these technologies are related to the
COVID-19 pandemic, which partly explains their accelerated progression.

Technologies rarely enter the Hype Cycle at the point at which social distancing technologies have entered it, but this technology has received extraordinary attention in the media, mainly because of
privacy concerns. Health passport is also unusual because technologies rarely enter the Hype Cycle
with a market penetration of 5% to 20% of the target audience. However, this technology is required
for access to public spaces and transport in China (the Health Code app) and India (the Aarogya
Setu digital service), and hundreds of millions of people in those countries are using it. We expect
that both technologies will reach the final stage of the Hype Cycle in less than two years.



Figure 1. Hype Cycle for Emerging Technologies, 2020

The Priority Matrix


The Priority Matrix maps the benefit rating for each innovation against the amount of time each
innovation requires to achieve mainstream adoption. The benefit rating provides an indicator of the
potential of the innovation, but the rating may not apply to all industries and organizations. So
identify which of the innovations offer significant potential benefits to your organization based on
your own use cases. Then use this information to guide investment decisions. Examine innovations
that offer more significant, near-term benefits because they can offer both strategic and tactical
benefits. Explore innovations with longer-term benefits if they offer strategic value. We recommend
tracking technologies that are important to your organization by creating a technology radar (see
“Toolkit: How to Build an Emerging Technology Radar”). Alternatively, use our Hype Cycle tool to
create a customized Hype Cycle for your organization (see “Create Your Own Hype Cycle With
Gartner’s Hype Cycle Builder”).

Emerging technologies are disruptive by nature, but the competitive advantage they provide isn’t yet
well known or proven. Most will take more than five years, and some more than 10 years, to reach the Plateau of Productivity. But some technologies on the Hype Cycle will mature in the near term,
so you must understand the opportunities these present, particularly those with the potential for
transformational or high impact.

Most technologies have multiple use cases. To determine whether a technology will have a
significant impact on your industry and organization, explore each of the use cases. Prioritize those
with the greatest potential benefit and prepare to launch a proof-of-concept project to demonstrate
the feasibility of a technology for a specific use case. When a technology can perform in a particular
use case with reasonable quality, examine the other obstacles to deployment to determine the
appropriate deployment planning horizon. Obstacles may include cost, regulation, social
acceptance and nonfunctional requirements.



Figure 2. Priority Matrix for Emerging Technologies, 2020

Off the Hype Cycle


The Hype Cycle for Emerging Technologies is not a typical Gartner Hype Cycle. It draws from an
extremely broad spectrum of topics, and we intend it to be dynamic. It features many technologies for only a year or two, after which we stop tracking them to make room for other important technologies. Most technologies that we remove from this Hype Cycle continue to be tracked on other Hype Cycles. Refer to Gartner's broader collection of Hype Cycles for items of ongoing
interest.

We’ve removed many of the technologies that appeared in the 2019 version of this Hype Cycle,
including:

■ 3D sensing cameras — Two Hype Cycles still track this technology, including “Hype Cycle for
Sensing Technologies and Applications, 2020.”
■ 5G — Six Hype Cycles still track this technology, including “Hype Cycle for Unified
Communications and Collaboration, 2020.”
■ AI cloud services — Three Hype Cycles still track this technology, including “Hype Cycle for
Artificial Intelligence, 2020.”
■ AR cloud — Two Hype Cycles still track this technology, including “Hype Cycle for Edge
Computing, 2020.”
■ Augmented intelligence — Three Hype Cycles still track this technology, including “Hype Cycle
for Artificial Intelligence, 2020.”
■ Autonomous driving Level 4 — “Hype Cycle for Automotive Electronics, 2020” still tracks this
technology.
■ Autonomous driving Level 5 — “Hype Cycle for Automotive Electronics, 2020” still tracks this
technology.
■ Biochips — “Hype Cycle for Sensing Technologies and Applications, 2020” still tracks this
technology.
■ Decentralized web — “Hype Cycle for Blockchain Technologies, 2020” still tracks this
technology.
■ DigitalOps — “Hype Cycle for Enterprise Architecture, 2020” still tracks this technology.
■ Edge AI — Five Hype Cycles still track this technology, including “Hype Cycle for Artificial
Intelligence, 2020.”
■ Edge analytics — Five Hype Cycles still track this technology, including “Hype Cycle for
Analytics and Business Intelligence, 2020."
■ Emotion AI — Eight Hype Cycles still track this technology, including “Hype Cycle for Sensing
Technologies and Applications, 2020.”
■ Flying autonomous vehicles — Eight Hype Cycles still track this technology, including “Hype
Cycle for Connected Vehicles and Smart Mobility, 2020.”
■ Graph analytics — Three Hype Cycles still track this technology, including “Hype Cycle for
Analytics and Business Intelligence, 2020.”
■ Immersive workspaces — Two Hype Cycles still track this technology, including “Hype Cycle for
the Digital Workplace, 2020.”



■ Knowledge graphs — Two Hype Cycles still track this technology, including “Hype Cycle for the
Digital Workplace, 2020.”
■ Light cargo delivery drones — Two Hype Cycles still track this technology, including “Hype
Cycle for Drones and Mobile Robots, 2020.”
■ Low Earth orbit satellite systems — Four Hype Cycles still track this technology, including
“Hype Cycle for Enterprise Networking, 2020.”
■ Personification — Three Hype Cycles still track this technology, including “Hype Cycle for
Privacy, 2020.”
■ Synthetic data — Three Hype Cycles still track this technology, including “Hype Cycle for Data
Science and Machine Learning, 2020.”
■ Transfer learning — “Hype Cycle for Data Science and Machine Learning, 2020” still tracks this
technology.

On the Rise

Authenticated Provenance
Analysis By: Avivah Litan; Svetlana Sicular

Definition: Authenticated provenance represents the authentication of assets that can be recorded
and tracked on the blockchain. The provenance of these assets can later be digitally verified by
blockchain network participants. There are many methods used to authenticate the provenance of
assets, depending on their nature and whether they are digital or physical goods.
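
To make the definition concrete, below is a minimal, hedged sketch in Python of how an asset's authentication evidence might be hashed, anchored and later verified by network participants; the field names, the in-memory "ledger" and the overall flow are illustrative assumptions, not any vendor's implementation.

```python
# Illustrative sketch only: hashing an asset's authentication evidence so its
# provenance can be anchored and later verified by network participants.
# A plain dict stands in for the blockchain ledger; a real deployment would
# use a distributed ledger plus digital signatures from the authenticator.
import hashlib
import json

ledger = {}  # stand-in for an immutable, shared ledger

def _digest(evidence: dict) -> str:
    return hashlib.sha256(json.dumps(evidence, sort_keys=True).encode()).hexdigest()

def record_provenance(asset_id: str, evidence: dict) -> str:
    """Anchor the hash of the authentication evidence under the asset ID."""
    ledger[asset_id] = _digest(evidence)  # on a real chain this write is immutable
    return ledger[asset_id]

def verify_provenance(asset_id: str, evidence: dict) -> bool:
    """Recompute the digest from presented evidence and compare with the anchor."""
    return ledger.get(asset_id) == _digest(evidence)

evidence = {"spectral_signature": "a91f03bc", "factory_qa_batch": "QA-2020-118"}
record_provenance("diamond-00042", evidence)
print(verify_provenance("diamond-00042", evidence))  # True while evidence is unaltered
```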

Position and Adoption Speed Justification: Counterfeit physical goods and fake digital content have become costly problems for organizations at best and major national and health security threats at worst. Blockchain provenance and asset tracking applications are being adopted to address these issues, but these applications don't address the problem of authenticating goods and content when they are initially recorded on the blockchain. The question remains: "How do you know that what you are tracking on the blockchain is real to begin with?" The problem is made worse because, on the blockchain, garbage in means garbage forever; the record can never be modified or deleted due to the immutable ledger.

Users considering blockchain provenance applications are aware of this limitation. Significantly,
some regulators Gartner has spoken with have also noted the problem, which they say must be
addressed before blockchain can be used to authenticate provenance of goods, such as food or
pharmaceuticals.

Gartner believes provenance authentication solutions will be in greater demand in the coming years as users adopt blockchain for provenance applications. Users will become increasingly aware of the need to digitally "certify" the first mile, that is, the onboarding of the goods or content being tracked on the blockchain in the first place. For now, that certification relies on manual audits or human trust, which is not scalable. For example, human fact checkers cannot keep up with detecting fake content, even though independent English-language fact checks grew by more than 900% from January to March 2020 (see the Reuters study "Types, Sources, and Claims of COVID-19 Misinformation").

User Advice: CIOs, enterprise architects, technology innovation leaders and application leaders responsible for applications and systems that generate and receive goods and content within their organization should:

■ Adopt technologies that can digitally authenticate and verify provenance to prevent fake or
altered goods and content from being distributed and consumed.
■ Work with your data scientists and IT teams to establish and track provenance of goods and
content your organization produces and consumes, using supporting technologies like
Blockchain, AI and factory component quality assurance testing.
■ Work with peers and industry groups to form networks of active participants who can
collectively and more effectively combat fake goods and content. Start by fixing the problem in
your own organization.

Business Impact: The good news is that there are some emerging solutions on the market, for example, solutions that rely on spectral imaging, AI models and factory quality assurance testing to authenticate the provenance of goods whose particles or components the technology can decipher and understand. These types of technologies have been applied to authenticating diamonds, wheat supplies, chemotherapy drugs and electronic components, and they are getting promising results.

Intelligent automation that digitally authenticates and verifies content provenance is clearly and
urgently needed. The more organizations that participate, the more effective the solutions. This is
because content rarely stays within the confines of the environment in which it is produced, so
solving this thorny fake goods and content problem very much depends on the growth of the
network, including the authenticators and the verifiers that adopt the solutions.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: IBM; ThinkIQ

Recommended Reading: "Leverage Blockchain Developments as Catalysts for Strategic Technology Planning Across the Supply Chain"

AI-Augmented Design
Analysis By: Brent Stewart

Definition: AI-augmented design is the use of artificial intelligence (AI), machine learning and
natural language processing technologies to automatically generate, and evolve via machine
learning, user flows, screen designs, content and presentation layer code for digital products.



Position and Adoption Speed Justification: AI-augmented design is in its infancy. Conceptually,
the design community sees the bold, fascinating — and even frightening — future AI-augmented
design will enable. Gartner expects to see AI at work in the digital product design platform market (Adobe XD, Figma, InVision) soon, leading to major leaps in efficiency, quality and time to market. At multiple companies, AI-augmented design is already transforming the customer experience through decision support and personalization in CX products, and site builder platforms like B12 have added AI to assemble content and best practices for a given business type in under a minute.

User Advice: Application leaders should:

■ Monitor developments in AI-augmented design, specifically at Adobe, followed by InVision.


■ Prepare digital product teams for the emergence of AI-augmented design, first through design-to-code technology, followed by bots that produce high-fidelity screen designs and written content.
■ Transition the role of humans in the design process from production-level creators to strategic
curators.

Business Impact: The potential business impact of AI-augmented design is tremendous. Imagine
the following scenario for creating an online store:

■ First, you tell the AI that you want an online store; the AI automatically generates the standard
structural elements of an online store from the homepage to product detail templates to the
shopping cart.
■ Next, you apply your style guide, giving the AI inputs on color, typography, iconography,
photographic style, etc.
■ Next, you provide some inspiration to the AI by indicating a set of stores you would like to
emulate.
■ Then, you hit submit and within minutes, the AI has produced three high-fidelity design
directions for you to evaluate and iterate upon.
■ Furthermore, every design element has an associated code component that is updated as you
tweak or curate the final design.

In a future powered by AI-augmented design, sites, apps and software will be generated in minutes
rather than days, weeks or months, and the resulting designs will be based on proven design
principles to ensure maximum usability and accessibility. In this future, the roles of production
designer, UX writer, and presentation layer developer are no longer needed. Instead, UX
practitioners will only need to tweak AI-generated designs and presentation layer code to be ready
for launch. As a result, UX teams will shrink and remaining practitioners will be focused on research,
strategy, and design curation (rather than design creation).

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience



Maturity: Embryonic

Sample Vendors: Adobe; InVision

Recommended Reading: “Artificial Intelligence Maturity Model”

“Hype Cycle for Artificial Intelligence, 2019”

“Predicts 2020: Artificial Intelligence Core Technologies”

DNA Computing and Storage


Analysis By: Nick Heudecker; Rajesh Kandaswamy

Definition: DNA computing and storage uses DNA and biochemistry to perform computation or
storage instead of silicon or quantum architectures. Digital data is represented as synthetic DNA
strands, loosely translating as memory and disk in traditional architectures, while enzymes provide
the processing capabilities. DNA computing relies on code stored in DNA strands and computing is
done through chemical reactions.
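
As a purely conceptual sketch of "digital data represented as synthetic DNA strands," the snippet below encodes bytes with one simple two-bits-per-base mapping (00 to A, 01 to C, 10 to G, 11 to T) and decodes them back; real schemes add error correction and sequence constraints, so this mapping is an assumption for illustration only.

```python
# Conceptual sketch: encode bytes as a DNA base sequence and decode them back.
# The 2-bits-per-base mapping is illustrative; production schemes are far more
# elaborate (error correction, GC balancing, avoiding homopolymer runs).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
print(strand)           # "CGGACGCCCGTACGTACGTT"
print(decode(strand))   # b"hello"
```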

Position and Adoption Speed Justification: DNA computing and storage makes its debut on the
Hype Cycle after two triggering events. The first is the development of an end-to-end “DNA drive”
prototype created by Microsoft Research and the University of Washington, demonstrating the
viability of an all-in-one DNA storage solution. The second trigger is the successful storage of
English-language Wikipedia as DNA by CATALOG, a startup in the DNA computing space.

For DNA computing and storage to progress to the Plateau of Productivity, significant technical
barriers must be overcome. First, the creation of synthetic DNA, the medium used to store digital
data as DNA, must become much more efficient and cost-effective. Today, synthetic DNA is almost
entirely used in life sciences research and there hasn’t been a compelling need to lower costs and
increase output. The development of commercial computing using DNA may change that, triggering
a next-generation breakthrough for DNA synthesis similar to what we witnessed with DNA
sequencing. Another barrier is access speeds and throughput rates. Current access and throughput
for DNA technologies are orders of magnitude lower than traditional technologies. Lastly, effective
and efficient processing methods must be found; these are currently the subject of research at multiple organizations.

DNA computing advances today are for rudimentary logic and they need to mature to handle
complex logic and math. Further development is also needed to make the computing architectures
reprogrammable, one of the key conveniences of silicon-based computers.

If these barriers can be overcome, DNA computing should advance steadily through the Hype Cycle
curve, reaching the Plateau of Productivity in roughly 10 years. As these technologies progress, it is
likely that Gartner will split this topic into two innovation profiles: one for DNA-based storage and
another for DNA-based compute. At this early stage, we are combining these topics for clarity.

User Advice: We recommend the following:



■ Begin evaluating the viability of DNA-based storage by gauging when storage prices fall to within three to four orders of magnitude of the cost of tape archival, and when write speeds reach the megabit-per-second range.
■ Exploit early opportunities to use DNA data storage for product-centric uses, such as
embedding DNA tags into products to ensure authenticity and provenance.
■ Monitor technology innovation in the DNA storage and computing space around cost and
performance breakthroughs and venture capital investment for an appropriate time to begin
proof of concept testing.

Business Impact: As DNA computing and storage matures, the impact will be transformational for
data storage, processing parallelism and computing efficiency. While unsuitable for every computing
task, DNA computing potentially lends itself to graph and machine learning inference, as well as
unstructured search and digital signal processing.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Microsoft; Twist Bioscience

Recommended Reading: “Maverick* Research: The Future of Computing Is DNA”

Low-Cost Single-Board Computers at the Edge


Analysis By: Tony Harvey

Definition: Low-cost single-board servers are small, low-cost, general-purpose systems that perform functions at the edge, such as data filtering (for example, anomaly detection) or AI inferencing (for example, image recognition). Based on a system-on-chip (SoC) solution, single-board servers are designed with the
minimum capability to perform the tasks required. I/O interfaces will vary, but at a minimum will
include a wired or wireless network. The operating environment will be based around a micro OS,
VMs and containers to enable rapid delivery of updates.
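
As a hedged sketch of the kind of workload such a board would run, the snippet below performs image-classification inference with the TensorFlow Lite runtime, which is commonly used on ARM single-board computers; the model file name is a placeholder assumption, and any quantized .tflite classification model with a single image input would fit this pattern.

```python
# Hypothetical sketch: running classification inference on a single-board edge
# device with the TensorFlow Lite runtime. "defect_classifier.tflite" is a
# placeholder model name, not a real artifact.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="defect_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> int:
    """Run one inference on a frame already resized to the model's input shape."""
    tensor = np.expand_dims(frame.astype(input_details["dtype"]), axis=0)  # add batch dim
    interpreter.set_tensor(input_details["index"], tensor)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details["index"])[0]
    return int(np.argmax(scores))  # index of the most likely class
```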

Position and Adoption Speed Justification: Single-board servers at the edge are a relatively new
development for processing data and delivering AI inferencing at the edge. Initially, the market was
driven by the introduction of very low-cost single-board general purpose computers such as the
Raspberry Pi. Now, the market has expanded with open-source microcontroller-based system like
Arduino, and AI inferencing systems such as the Texas Instruments BeagleBone AI, and the NVIDIA
Jetson Nano.

Unlike larger edge servers, which are generally repackaged x86 servers, single-board edge servers
are fixed configuration single-board systems based on ARM CPUs. Opportunities exist for other
CPU architectures such as x86 and RISC-V, but the dominance of ARM in the SoC space will make it difficult for other chip architectures to meet the power and performance requirements to succeed
in this space.

While the profusion of vendors and the low costs associated with the hardware make prototyping and development very easy, the lack of standards and, just as importantly, the lack of security features make enterprise usage less likely. As the opportunities grow, so will demand for more secure solutions and a more standardized software environment for developers.

User Advice: Evaluate the use of single-board edge servers for edge projects where a large number
of low-cost devices will be required to provide data processing, image recognition, voice recognition
or AI inferencing capabilities. Expect this market to evolve rapidly over the next few years with
improved performance and new capabilities being rolled out at a rapid cadence. Choose single-board edge servers that can be rolled out rapidly, without skilled staff on-site, and that can easily be managed and updated in the field. Security should be built into the system, and potential vendors should be evaluated for security across all areas, including physical, data storage, communications, management and updates. Integration with existing Internet of Things (IoT) and artificial intelligence
(AI) frameworks should also be considered when selecting a single-board edge server.

Business Impact: Single-board edge servers can help enterprises realize the potential of the large
pool of data that is generated at the edge. The ability to use this data has significant potential to
generate cost savings, for example, by allowing real-time image processing to recognize faulty or
damaged items on manufacturing lines. It also helps develop new areas of business that will be
enabled through real-time data processing at the edge.

Enterprises that do not adopt single-board edge servers may find themselves left behind as
enterprises that successfully integrate these systems into their digital transformation strategy will
lower their costs and deliver new services to market faster.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Coral; NVIDIA; Raspberry Pi Foundation (Raspberry Pi); Texas Instruments (TI)

Recommended Reading: “Top 10 Strategic Technology Trends for 2020: Empowered Edge”

“How to Overcome Four Major Challenges in Edge Computing”

“Why and How I&O Should Lead Edge Computing”

Self-Supervised Learning
Analysis By: Pieter den Hamer; Erick Brethenoux

Definition: Self-supervised learning is an approach to machine learning in which labeled data is created from the data itself, without having to rely on external (human) supervisors that provide labels or feedback. It is inspired by the way humans learn through observation, gradually building up general knowledge or "common sense" about concepts and their relations in the real world.

Position and Adoption Speed Justification: Self-supervised learning aims to overcome one of the
biggest drawbacks of supervised learning: the need to have access to typically large amounts of
labeled data. This is not only a practical problem in many organizations with limited relevant data or
where manual labeling is prohibitively expensive. It is also a more fundamental problem in current
AI, in which the learning of even simple tasks requires a huge amount of data, time and energy. In
self-supervised learning, labels can be generated from relatively limited data. In essence, this is
done by masking elements in the available data (e.g., a part of an image, a sensor reading in a time
series, a frame in a video or a word in a sentence) and then training a model to “predict” the missing
element. Thus, the model learns, for example, how one part relates to another, how one situation
(captured through video and/or other sensors) typically precedes or follows another, and which
words often go together. In other words, the model increasingly represents the concepts and their
spatial, temporal or other relations in a particular domain. This model then can be used as a
foundation to further fine-tune the model — using “transfer learning,” for example — for one or
more specific tasks with practical relevancy.
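
The snippet below is a deliberately small, hedged illustration of the masking idea described above, written in PyTorch against a synthetic, unlabeled time series: one reading in each window is hidden and a model learns to predict it from the rest, so the "labels" come from the data itself. The signal, window size and tiny network are assumptions chosen only to show the mechanism.

```python
# Minimal sketch of self-supervised "masked prediction" on an unlabeled time
# series: one reading per window is hidden and the model learns to predict it
# from its neighbours, so the training signal comes from the data itself.
import torch
import torch.nn as nn

series = torch.sin(torch.linspace(0, 50, 2000))             # unlabeled sensor-like signal
windows = series.unfold(0, 16, 1)                           # sliding windows of 16 readings
mask_idx = 8                                                 # the position we pretend is missing
inputs = torch.cat([windows[:, :mask_idx], windows[:, mask_idx + 1:]], dim=1)
targets = windows[:, mask_idx]                               # "labels" generated from the data

model = nn.Sequential(nn.Linear(15, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                         # short pretraining loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs).squeeze(-1), targets)
    loss.backward()
    opt.step()
# The pretrained network now encodes how readings relate to their neighbours and
# could be fine-tuned for a specific downstream task (transfer learning).
```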

User Advice: Self-supervised learning is an important candidate enabler for the next main phase in AI, overcoming the limitations of supervised learning and going beyond its current dominance. Self-
supervised learning has only recently emerged from academia and is currently only practiced by a
limited number of innovative AI companies. In practice, it is worth considering when available data
volumes are limited or when the benefits of the ML solution do not outweigh the costs of manual
labeling or annotating of data. However, it currently depends very much on the creativity of highly
experienced ML experts to carefully design a self-supervised learning task, based on masking of
available data, allowing a model to build up knowledge and representations that are meaningful to
the business problem at hand. Tool support is still virtually absent, making implementation a
knowledge-intensive and low-level coding exercise.

Business Impact: The potential impact and benefits of self-supervised learning are very large, as it will extend the applicability of machine learning to organizations that do not have large datasets available. It may also shorten training time and improve the robustness and accuracy of
models. Its relevancy is most prominent in computer vision, natural language processing, IoT
analytics/continuous intelligence, robotics or other AI applications that rely on data that is typically
unlabeled. For AI companies, self-supervised learning has the potential of bringing AI closer to the
way humans learn: mainly from observation, building up general knowledge about the world through
abstractions and then using this knowledge as a foundation for new learning tasks, thus
incrementally building up ever more knowledge.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Craftworks; Facebook; Google; Microsoft



Recommended Reading: “3 Types of Machine Learning for the Enterprise”

“Five Ways Artificial Intelligence and Machine Learning Deliver Business Impacts”

Health Passport
Analysis By: Brian Burke; Arnold Gao

Definition: Health passports are a pandemic/epidemic response technology implemented as mobile apps that indicate the level of infection risk of the holder. They are used to gain access to buildings, supermarkets, restaurants, public spaces and transportation.

Position and Adoption Speed Justification: In February 2020, Alipay and WeChat worked with the government to launch a national "Health Code" in China, which is required to gain access to many public and private spaces and services. Health Code is widely used as a screening tool to minimize the risk of COVID-19 transmission. It provides the user with a color QR code based on their designated health status: red means confirmed infected with COVID-19, yellow means the holder should be in quarantine, and green means the holder is free to travel. Health Code checks are very common, making it difficult to move without having a green code. In India, travelers must be marked "safe" on the Aarogya Setu app for travel by rail and air. In May 2020, the UAE launched the ALHOSN UAE app, which also provides a unique QR color code (red = infected, yellow = quarantine, green = OK, gray = not tested) but was not being used to gain access to places at the time of writing.
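
The sketch below is illustrative only (it is not the Health Code, Aarogya Setu or ALHOSN implementation): it shows how a passport app might map a holder's designated status to the color codes described above and assemble the payload behind a QR code. The status names and payload fields are assumptions.

```python
# Illustrative sketch only (not any real scheme): mapping a holder's designated
# health status to a color code and building the payload that a passport app
# might encode into the holder's QR code.
from datetime import datetime, timezone
import json

STATUS_TO_COLOR = {
    "confirmed_infected": "red",      # confirmed COVID-19 case
    "quarantine_required": "yellow",  # holder should be in quarantine
    "no_known_risk": "green",         # holder is free to travel
}

def passport_payload(holder_id: str, status: str) -> str:
    """Assemble the data that would sit behind the holder's QR code."""
    return json.dumps({
        "holder": holder_id,
        "color": STATUS_TO_COLOR[status],
        "issued_at": datetime.now(timezone.utc).isoformat(),
    })

print(passport_payload("traveler-042", "no_known_risk"))
```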

Because these technologies have been introduced only in the past few months, there will be rapid evolution in this space. There are many social obstacles to these technologies, the most important being the restriction of personal freedoms and privacy. Social acceptance of the technology will depend very much on the culture of the society where it is introduced. One large issue is providing some type of health passport to people without a mobile phone; alternative methods are required.

User Advice: Trust and transparency will be key to the acceptance of any health passport. Having
clarity into the algorithms used to generate the color code is of paramount importance. Simplicity is
also a key attribute as people will not want to use different health passports to gain access to
different locations and services. Many people will view these health passports as assurance that
they are at low risk of infection from the people around them in public places. But many people will
view these apps as limiting their freedom, even discriminatory.

Ideally, these health passports would be managed by public health services but there is a risk that
organizations may take things into their own hands if authorities don’t act quickly. Employers,
schools, airports (among others) all have a keen interest in providing a low risk environment for their
employees and visitors, and in fact may be legally liable if they fail to do so. These organizations
may implement their own health passport, creating a challenge for people to manage many different
health passports.

Governments are also eager to reopen travel for foreign visitors and a health passport may help to
achieve that if there is trust between the issuing authority and the destination country/region. The
standards for maintaining a trusted code will evolve over time and may include periodic viral tests, antibody tests, quarantine history and travel history. Interoperability will be a key requirement to use
a health passport for travel, but standards for interoperability are nonexistent today.

Business Impact: Health passports would help to enable all locations to begin accepting visitors
with a lower level of risk, opening doors to end lockdowns and helping to restore confidence and
rebuild the global economy. This will be a great benefit to businesses and all organizations as a lack
of confidence will prevent or minimize use of services. Health passports will also have a positive effect on reducing new infections overall: because moving around becomes impossible without a green code, people in quarantine will effectively be forced to stay home, eliminating the need for the separate quarantine enforcement technologies that are in use in some countries.

Harmonized health passport systems will also be highly desirable to enable international travel for
both business and pleasure.

Benefit Rating: Transformational

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Alipay; Bizagi; Circle Pass Enterprises; Folio; Vottun; WeChat

Recommended Reading: “Use Gamification to Flatten the Curve of COVID-19 Infections”

Bidirectional Brain-Machine Interface


Analysis By: Sylvain Fabre; Annette Jump

Definition: Bidirectional brain-machine interfaces (BMIs) are brain-altering neural interfaces that
enable two-way communication between a human brain and computer or machine interface.
Bidirectional BMIs allow not only monitoring of the user's EEG (electroencephalogram) and mental
states, but also some action to be taken to modify that state based on analytics and insights. Brain
state modification occurs via noninvasive electro stimulation through a head-mounted wearable, or
an invasive implant. When connected, these enable the IoB (Internet of Brains).

Position and Adoption Speed Justification: It is still very early days for bidirectional BMI. There are already applications of one-way BMI wearables, where the focus is on monitoring the state of the user or using the user's intent to operate some external device, but without trying to externally modify the user's mood. Some of these solutions even measure the response and attitude of consumers to products and companies.

In September 2019, Facebook acquired the neural interface startup Ctrl-labs for over $500 million,
and will work to include the technology as a computer interface and in AR/VR consumer products
using Facebook Reality Labs.

In 2017, DARPA also awarded a $65 million contract to develop a bidirectional BMI.



In order to estimate the progress for bidirectional BMI, it is worth noting that a related earlier trend,
smart wearables, experienced significant hype in 2016 through 2018, particularly fueled by interest
in consumer smart wearables devices and software. However, venture capital investments in
companies developing smart wearable products and solutions decreased from $2.8 billion in 2018
to $1.6 billion in January 2019 through October 2019. This return to 2017 investment levels
highlights the shift in VC investors’ evaluation of opportunities and potential markets for some smart
wearables. The 2019 decline in VC smart wearable investment underscores the issues linked with
smart wearable devices, including high cost, slow consumer adoption, high drop-off rates for some
smart wearables, and the complexity of integration between various data systems.

Since bidirectional BMIs are a more advanced and extreme form of wearable (in effect, an implant
with bidirectional connectivity), the above trend provides some guidance as to what needs to occur
to allow a wider adoption of bidirectional BMI. Namely, it will need to become more affordable and
find ways to add functionality without added invasiveness.

An early application is from NYX Technologies, currently in the beta testing phase, which aims to use
neurotechnology to both monitor and stimulate brain function and improve sleep.

User Advice: Enterprises should be prepared for bidirectional BMI devices creeping into the enterprise in the future; BYOD may occur before specific legislation is in place, so business leaders should:

■ Ensure customer safety and business security by implementing data anonymity and privacy
(beyond GDPR) for brain-wearable data in products.
■ Highlight trade-offs when promoting wellness solutions.
■ Take responsibility: Set up a steering board to monitor products sold to consumers and
provided for employees. Preempt potential legal liability by regularly reviewing implanted
wearables features and their use cases and deciding on what is acceptable in terms of read/write from and to users' brains.
■ Establish policies for unauthorized implantables: While they cannot easily be removed, users
may be prohibited from some roles such as operating vehicles or machinery (as BYOD
bidirectional BMI implants would pose risks similar to drugs or stimulants).

Business Impact: There are multiple form factors for devices designed to be worn or implanted to
sense the human body, such as smartwatches, head-mounted displays (HMDs), ear-worn
wearables or hearables, wristbands, smart rings, smart garments, smart contact lenses,
exoskeletons, implants and ingestibles. Over the next three to 10 years, they will enable business
use cases including: authentication, access and payment; immersive analytics and workplace; and
control of power suits or exoskeletons.

What is unique about bidirectional BMIs is that they are a brain-altering class of wearables/implantables. In addition to the use cases mentioned above, we now look at bidirectional connectivity for the brain: for example, stimulation applied to boost alertness in response to markers of fatigue in a worker's EEG, or relaxing cortical currents applied to the brain of a teacher or nurse showing signs of irritability. This creates very specific ethical and security challenges, because these devices are a direct interface to the human brain.
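
As a hedged, purely illustrative take on the "markers of fatigue in a worker's EEG" example above, the snippet below computes a theta/alpha band-power ratio, one commonly cited drowsiness proxy, over a synthetic one-channel EEG window using NumPy; the sampling rate, threshold and signal are assumptions, and this is not a clinical algorithm.

```python
# Hedged illustration: computing a crude "fatigue marker" from one EEG channel,
# the kind of signal a bidirectional BMI might monitor before deciding whether
# stimulation is warranted. The signal here is synthetic.
import numpy as np

FS = 256                                     # sampling rate in Hz (assumption)
t = np.arange(0, 4, 1 / FS)                  # 4-second analysis window
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)  # fake EEG trace

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(len(eeg), 1 / FS)

def band_power(lo_hz: float, hi_hz: float) -> float:
    return float(spectrum[(freqs >= lo_hz) & (freqs < hi_hz)].sum())

fatigue_ratio = band_power(4, 8) / band_power(8, 13)   # theta power vs. alpha power
if fatigue_ratio > 1.0:                                 # threshold is illustrative only
    print("fatigue marker detected; stimulation could be considered")
```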



Bidirectional BMIs are the front line of innovation that powers human augmentation. They are
designed to exhibit some level of autonomy when connection to the internet is not available or
desirable. They are also designed to learn using machine learning (ML), interact with the
environment around the wearer, enhance human abilities, and connect humans to the Internet of
Things (IoT) and the Internet of Brains (IoB).

As a result, direct read-and-write access to brain activity creates many opportunities for workforce
enablement. It also creates new vulnerabilities for individuals and their companies by adding an attack vector and human factor issues, such as altering users' perception of reality or even their
personality.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: BrainCo; Facebook; Kernel; Neuralink; Neuroelectrics; NeuroMetrix; NYX; Omron
Healthcare

Recommended Reading: "Maverick* Research: Mass Adoption of Brain-Monitoring and -Altering Wearables Creates Risk of Mind Control"

“Emerging Technology Analysis: Smart Wearables”

“Venture Capital Growth Insights: Smart Wearables”

Generative Adversarial Networks


Analysis By: Brian Burke

Definition: Generative adversarial networks (GANs) are composed of two neural network models, a
generator and a discriminator, that work together to create original simulations of objects such as
videos, images, music and text (poetry, journalistic articles, marketing copy) that replicate authentic
objects or their pattern, style or essence with varying degrees of quality or realism. GANs can also
be used in an inverse design process to generate models of novel drug compounds or new
materials with targeted properties.
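
To make the generator/discriminator interplay concrete, here is a minimal, hedged PyTorch sketch on toy two-dimensional data; the architectures, target distribution and hyperparameters are arbitrary assumptions chosen only to show the adversarial training loop.

```python
# Toy GAN sketch: the generator learns to mimic a target distribution while the
# discriminator learns to tell real samples from generated ones. All choices
# here (toy 2-D data, layer sizes, learning rates) are illustrative assumptions.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])    # "authentic" objects
    fake = G(torch.randn(64, 8))                                   # generated objects

    # Discriminator step: score real samples toward 1 and generated ones toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the (just updated) discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```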

Position and Adoption Speed Justification: Originally proposed by Ian J. Goodfellow in 2014, this
technology is in a nascent state, with most applications coming from research labs. Commercial
applications have just started being explored. The algorithms require a lot of manual tuning to make
them perform in the desired manner, and development of the technology is constrained by the extremely limited pool of people with knowledge in this area. As commercial applications become more commonplace, the technology will improve, because the benefits are significant.

GANs can be used for both good and bad purposes. They are commonly used to create images of
people who don’t exist (deep fakes), to create fake political videos, to compose music and poetry. In

Gartner, Inc. | G00450415 Page 21 of 65


2018, an image produced by a GAN was sold at an auction for $432,500. While these “novelty”
applications are prominent, research is underway to apply these algorithms to far more valuable
challenges, such as generating marketing content and graphic designs, creating simulated environments for training autonomous vehicles and robots, and generating synthetic data to train neural networks
and to protect privacy. GANs are also being used in inverse design to create targeted
pharmaceutical compounds and materials with specific properties.

User Advice: Technology innovation leaders in high-risk-tolerant organizations should evaluate the
potential for leveraging this technology today, and partner with universities to conduct proofs of concept where the potential benefits and drawbacks are significant. Technology innovation leaders
should do their due diligence and consider the fact that while the core technologies are readily
available in the public domain, the technology is brittle, resource hungry and requires significant
(and rare) AI skills. They should also focus on other pressing issues such as explainability, as GANs
are “black boxes” and there is no way to prove the accuracy of the objects produced other than by
subjective methods.

Business Impact: The powerful idea is that deep neural network classifiers can be modified to
generate realistic objects of the same type. GANs have the potential to impact many creative
activities from content creation (art, music, poetry, stories, marketing copy, images and video) to
many types of design (architecture, engineering, drug, fashion, graphic, industrial, interior,
landscape, lighting, process). GANs might also be used to create simulations where actual data may
be difficult to obtain (training data for machine learning) or pose a privacy risk (medical images for
health data) or be costly to produce (backgrounds for video games). GANs have the potential to
augment humans’ talents for many creative tasks across many industries. GANs are part of a group
of generative AI methods (including variational autoencoders [VAE], recurrent neural networks [RNN]
and reinforcement learning [RL]), which are being used in inverse design. In material science,
inverse design turns the material discovery process on its head by starting with defining the
properties of the target material and analyzing the chemical space to generate a material with the
required properties.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Amazon; Apple; Autodesk; DeepMind; Google; Insilico Medicine; Landing AI;
Microsoft; Neuromation; NVIDIA

Recommended Reading: “How to Benefit From Creative AI — Assisted and Generative Content
Creation”

“Top 10 Strategic Technology Trends for 2020: AI Security”

“Innovation Tech Insight for Deep Learning”



Biodegradable Sensors
Analysis By: Michael Shanler

Definition: Biodegradable sensors are thin-film sensors manufactured using nontoxic materials that
can go into common waste streams. The primary application is for microsensing for food
monitoring. Some of these sensors are bioresorbable, meaning they can be ingested. Others are
biocompatible, meaning they can be implanted into medical devices or pharmaceutical products
before dissolving or harmlessly passing after ingestion.

Position and Adoption Speed Justification: This is a new innovation profile for 2020.
Biodegradable sensors are a relatively old concept within academia, dating back to the 1950s, but
few research institutions were able to manufacture and design them for the right price points for use
in products at scale until recently. Over the last five years, multiple research institutions in
Switzerland, the U.S., the U.K., Japan and Korea have pushed biodegradable sensors to the point
where they are ready for industry use. Leveraging advanced design and simulation principles,
polymer science, and green technologies made this advance possible.

Today, biodegradable sensors can be designed to perform a variety of specific functions. They
operate as detectors for changes to pH, humidity, oxidation, gases, glucose, antibodies, and
chemicals. Others are manufactured as RFID tags — with carbon electrodes printed on paper.
Some circuits are printed to be used as repeaters for both active and passive sensor technology.
These sensors are often manufactured by embedding chips or sandwiching sensors in between
thin-film polylactic acid (PLA) or dissolvable silicon, and are produced using corn and potato starch.
PLA and related biofilm and green plastics are harmless and biodegrade over time. Compositions
comply with U.S. and EU food legislation and label requirements.

The sensors embedded into the material may not be fully biodegradable, but they are designed
using nontoxic materials that can exist within the human body at low levels, even when
accumulating over time (such as molybdenum, magnesium, zinc, silicon dioxide and nitride). Some
use RFID-related technologies. Others are powered by the substrate or products in which they are
embedded. The sensors often can operate for a few weeks before eroding. They are designed to go
to waste in traditional landfills. Most of these sensor formats are smaller than a grain of rice;
however, research organizations are actively miniaturizing them even further.

Gartner has observed several prototypes at companies and some initial use cases, but beyond small commercial offerings (such as Proteus Digital Health), the technology has yet to be scaled to the masses. These sensors have a lot of potential to change the way food, retail and medical devices are monitored and used; however, we only envision success at scale once manufacturers hit the right price points and margins.

This is a newly commercialized technology by a handful of vendors; thus, Gartner places this
embryonic technology in the Innovation Trigger phase.

User Advice: Evaluate the advantage of biodegradable sensors and how they may dovetail with smart product or Internet of Things (IoT)-enabled product strategies. Drivers could be product quality, tracing, authenticity or performance. CIOs must also plan to build in the IoT data ingestion and analytics capabilities required to deliver business value from sensors. Specifically:

■ CIOs and CTOs in the food and beverage, consumer, and retail industries should evaluate using
these sensors for tracking product quality “use by dates,” locations, unique identifiers and
performance (such as pH, oxidation, taste and degradation) of fruits, meats, grains and
vegetables. This activity must also include potential impacts on product margins and the cost of
goods sold. These sensors can be affixed to the inside of outer packaging (cereal boxes),
affixed to product labels (such as biocompatible RFID stickers on premium apples) or even
embedded into the products themselves (inside ground beef).
■ CIOs and CTOs in the healthcare and life science space should evaluate bioabsorbable and
biocompatible iterations for the IoT and sensing potential for both drugs and devices. CIOs and
technology leaders must evaluate the sensors while accounting for the downstream device
regulatory requirements (for example, 510K class I, II, and III submission) and determine what is
required to put them into production.

Before investing, life science companies must outline with a clear vision what is required to make
these sensors work in their highly regulated manufacturing, supply chain and distribution channels.
Teams must determine where new policy, systems and business processes are needed to support
serialization, unique identifiers and safety systems. They also must determine early on whether the
sensors are considered part of software as a medical device, companion device and/or digital
therapeutics.

Business Impact: These sensors can add data that augments insight into the customer or distribution channel, and they affect smart supply and logistics by adding capabilities for measuring real-time physical, chemical and biological functions.

These sensors could help streamline the product life cycle and provide data for location,
serialization, product quality, tampering and product performance. There will be useful benefits from
combining sensor data with informatics and operational systems for R&D, quality, regulatory,
manufacturing and supply chain, or other specialized areas (such as clinical, diagnostics and
safety). Specifically, these sensors can dovetail with smart products, IoT analytics and sensor-
enabled business models. These sensors can also be used to support smart manufacturing, as well
as adaptive supply chains and distribution channels.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: c2renew; EPFL; ETH Zurich; Grolltex; imec; Murata; NanoScale Systems; Proteus
Digital Health

Recommended Reading: “Top Consumer Food and Beverage Trends for 2020”

“Major Consumer Value Shifts Driving Marketing Changes in the Food and Beverage Industry”

“Scaling Digital Commerce Into a Digital Platform Business”

“The Gartner Supply Chain Top 25 for 2019”

Differential Privacy
Analysis By: Van Baker

Definition: Differential privacy is a system for sharing information about a dataset while withholding or distorting certain information elements about individuals in the dataset. The system uses a precisely defined mathematical algorithm that randomly inserts noise into the data, ensuring that the result of any analysis does not change significantly whether or not an individual’s data is included.

Position and Adoption Speed Justification: As sensitive electronic data is stored, inadvertent
exposure of personal data via analytics is a risk. Additionally, hackers may gain access to more
databases holding individual information that is potentially damaging if revealed or used against
them. This increasingly puts enterprises at risk of legal liability if they don’t protect this data. One
defense against this is the use of differential privacy systems. This effectively delivers the same analytic results whether an individual’s data is included in the dataset or not. Differential privacy
systems use probabilistic randomization of the data elements in the dataset to make it impossible
for malicious actors to reverse engineer those data attributes and tie them to a specific individual.
While not specifically designed to prevent reidentification attacks, differential privacy does effectively protect against these attacks. Differential privacy does have a weakness, however: its guarantee degrades if the algorithm is repeatedly applied to the same data, because each query consumes part of the privacy budget.
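
To make the mechanism concrete, the following is a minimal sketch of the Laplace mechanism that underlies many differential privacy systems. It is illustrative only: it assumes Python with NumPy, uses a hypothetical list of record IDs, and omits the privacy-budget accounting a production system would need.

import numpy as np

def private_count(records, epsilon=0.5, sensitivity=1.0):
    """Return a noisy count so the result barely changes whether or not
    any single individual's record is included in `records`."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(records) + noise

# Hypothetical example: count patients matching a condition without revealing
# whether any specific patient is in the dataset. A smaller epsilon means more
# noise and stronger privacy; a larger epsilon means more accuracy.
patients_with_condition = ["p101", "p105", "p230", "p412"]
print(private_count(patients_with_condition, epsilon=0.5))

Repeating such queries consumes the privacy budget, which is the weakness noted above.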

User Advice: Differential privacy systems should be employed when datasets have significant value
to be extracted but the information contains sensitive individual information or legally protected
information. It can be applied to any information that is associated with personally identifiable
attributes or is defined as sensitive information under a data protection regulation. The increasing
sophistication of attacks against data repositories will make the use of differential privacy systems
and other methods such as data encryption increasingly necessary for organizations holding
personally identifiable information. Organizations looking to monetize their data assets containing
personally identifiable information will find themselves under increasing scrutiny. A failure to employ
differential privacy and other data protection mechanisms will likely increase the organizations’
exposure to legal proceedings and potentially damaging financial penalties:

■ Enterprises holding sensitive data assets with personally identifiable information should explore
the use of differential privacy systems to decrease the likelihood they will expose sensitive data
that can be tied to individuals.
■ Enterprises should also take other measures to protect data assets containing sensitive data
that contains personally identifiable information.
■ Organizations should not assume that differential privacy systems alone are enough to prevent
breach of sensitive information.

■ Enterprises operating in high-performance environments that also require a high level of precision in their models are encouraged to compare this approach with other privacy-preserving compute platforms.
■ Differential privacy can be applied at the data level or the group level and should be deployed
appropriately.
■ Differential privacy can be used in isolation but is best used with other methodologies to best
protect individual information and the enterprise.

Business Impact: Businesses increasingly recognize the value in data such as customer information in CRM databases; however, they also find themselves liable for protecting this sensitive personal data. Regulations that define how enterprises may use this information are increasingly being enacted, and the liability for leaking such information can be substantial. In addition, the reputation of the business and the trust associated with the business
can be significantly damaged by breaches of sensitive information. This will require businesses to
use whatever means available to protect datasets containing sensitive information. This exposure is
not limited to the datasets in control of the business as malicious actors can increasingly combine
data sources to reidentify individuals even if the data used by the business is anonymized. As such,
businesses should employ whatever means available to protect personally identifiable information
from being exposed.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: 01Booster; Hazy; LeapYear

Recommended Reading: “Boost Your Training Data for Better Machine Learning”

Private 5G
Analysis By: Sylvain Fabre; Joe Skorupa

Definition: Private 5G is defined as a private mobile network (PMN) based on 3GPP 5G to interconnect people and things in an enterprise. CSPs and TSPs have the potential to offer 5G PMN to various verticals, such as Industry 4.0, mining, oil, utility and railroad companies; IoT service providers; university campuses; and stadiums/arenas. Private 5G offerings provide a network separate from the public network and can include voice, video, messaging and broadband data, as well as IoT/M2M use cases with higher performance requirements.

Position and Adoption Speed Justification: Most enterprise use cases that justify deploying a private mobile network (PMN) requiring cellular connectivity for mobility will be adequately served by a 4G network. Where 5G is justified, a PMN can provide the required functionality earlier than local CSPs may have deployed it on their public infrastructure. There is, however, another class of use cases not focused on mobility but requiring a high-performance backbone where wiring is complex and costly, as in a factory deployment. In that case, support of autonomous delivery vehicles on the shop floor could apply, but as a complementary application. Some verticals may adopt 5G sooner as a response to the current COVID-19 pandemic, driven by cost optimization, resiliency and automation concerns.

Volkswagen is reported to plan 5G private deployments in 122 German factories in 2020.

An early implementation example: BMW Brilliance Automotive (BBA, BMW’s JV in China) claims complete 5G coverage in all of its factories.

User Advice: Enterprises looking to deploy 5G PMN should:

■ Seek out quotations not only from CSPs but also from other possible providers, such as large equipment vendors and smaller specialists.
■ Consider SIs and consultancies for design, deployment and managed services.
■ Consider licensed as well as unlicensed/shared spectrum options where available (for example, 5G was approved for use in the CBRS band in February 2020, and German regulator Bundesnetzagentur (BNetzA) has allocated spectrum in the 3.7 GHz to 3.8 GHz band for industrial local 5G).
■ CSPs offering Private 5G to industrial buyers need to work with IT service providers that have
the required industry skills (and knowledge of other technologies) to provide a value proposition
of how 5G supports a specific use case or business KPI to justify the additional investments
required to make existing infrastructure 5G ready. For example, in manufacturing 5G is seen as
a platform that combines connectivity with security and AI capabilities.

Business Impact: While 5G standards are defined by 3GPP, other bodies are contributing in order to improve applicability to connected industrial applications, for example, 5G-ACIA (Alliance for Connected Industries and Automation). 5G PMN can offer enterprises improved security and independence and enable efficiency gains in several manufacturing and industrial processes.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Athonet; Ericsson; Huawei; Nokia

Recommended Reading: “4 Hype Cycle Innovations That Should Be on the Private Mobile
Networks Roadmap for 5G Security, CSP Edge and Slicing”

“Market Impact: How CSPs Can Rebuild Resilience and Business Continuity During Disruption”

“Cool Vendors in Communications Service Provider Network Operations”

Small Data
Analysis By: Jim Hare; Pieter den Hamer

Definition: The concept of “small data” refers both to the problem of training AI models when training data is insufficient or sparse and to the approaches for doing so when small amounts of training data must suffice. A variety of strategies and data augmentation techniques can overcome the problem, such as simulation, synthetic data, transfer learning, federated learning, self-supervised learning, few-shot learning and knowledge graphs.

Position and Adoption Speed Justification: Supervised deep learning that started the current AI hype is already fulfilling its promise, but it needs a lot of labeled data. Unlike consumer internet companies, which have data from billions of users to train AI models, collecting massive training sets is often not feasible in most enterprises. Also, most data science teams are not in a position to develop and train complex supervised models from scratch due to resource limitations. Moreover, reducing the need for big data and gaining the ability to use small data results in AI solutions that are more resilient and agile in handling change. For example, the COVID-19 virus has caused many production AI models across different industry verticals to lose accuracy because they were trained using big data that reflected how the world worked before the pandemic hit. Retraining models using the same approach was not feasible, because data from just the past few weeks is too limited to reflect the patterns of the new market circumstances. As a result, data scarcity has emerged as a major challenge, especially as organizations become dependent on AI to run their businesses, even in times of disruption.

There is a growing number of data science innovations and open source projects focused on
different data augmentation or other techniques. Among others, graph techniques have garnered
new attention because of the ability to find patterns in small data, or to reduce dimensionality,
complementing machine learning. Several new AI startups have created platforms and solutions
that operate on small datasets.

User Advice: Data and analytics leaders whose teams are experiencing data scarcity issues in
exploring new AI use cases, building hypotheses, or handling production models that have lost their
accuracy should consider this approach first:

■ Simpler Models — Replacing more complex models with simpler, classical ML models such as linear regression, support vector machines, k-nearest neighbors and naïve Bayes that can be trained on small amounts of data. Proper feature engineering and the use of simpler models, or ensembles thereof, should be in the toolbox of any data scientist, especially in the case of small data (a minimal sketch follows).
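
The sketch below illustrates this point under stated assumptions: Python with scikit-learn and NumPy, and a tiny, synthetically generated table standing in for a real small dataset. It simply shows that a classical model such as naïve Bayes can be trained and cross-validated on a few dozen rows.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Hypothetical "small data": 60 labeled rows with 4 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# A simple, classical model; cross-validation matters even more when data is scarce.
model = GaussianNB()
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 2))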

If replacing existing models with simpler models is not feasible, consider these emerging data
augmentation and modeling approaches:

■ Transfer Learning — Enables AI solutions to learn from a related task where there is ample data available and then uses this knowledge to help overcome the small data problem. For example, an AI solution learns to find damaged parts from 1,000 pictures collected from a variety of products and data sources. It can then transfer this knowledge to detect damaged parts in a new product using just a few pictures (a minimal sketch of this pattern appears after this list).

■ Federated Learning — Enables collaborative ML by sharing local model improvements at a
central level, where the central model combines locally trained or retrained models on small
data in a decentralized environment. For example, when a hospital wants to develop a model
for treating a condition, but has limited data, it trains the model on its own local data. It then
passes this model to the next hospital that keeps training the model on its own data and so on,
combining the model improvements. It also increases data privacy as no local data needs to be
shared centrally.
■ Synthetic Data — Used to generate data to meet very specific needs or conditions that are not
available in existing authentic data. Can be useful when either privacy needs limit the availability
or usage of the data or when the data needed to train a model does not exist.
■ Self-supervised Learning — A relatively recent ML technique where the training data is autonomously (or automatically) labelled. The datasets are labelled by finding and exploiting the correlations between different input signals. Production models can continue learning in production, making self-supervised learning well suited to changing environments.
■ Few-shot Learning — Classifies new data having seen only a few training examples. This forces
the AI to learn to spot the most important patterns since it only has a small dataset. Useful
when training examples are hard to find or where the cost of labelling data is high.
■ Other approaches include the sharing of scarce data between organizations, together building a
larger set, and the use of reinforcement learning, where data is gathered through simulations or
experimentation.
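
As referenced in the Transfer Learning item above, the following is a minimal sketch of the fine-tuning pattern, assuming Python with a recent PyTorch and torchvision installation. The two-class “damaged vs. undamaged” setup and the training inputs are hypothetical placeholders, not a complete solution.

import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on a large, generic image corpus.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so the few available images
# only need to teach the new classification head.
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # damaged vs. undamaged

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One update on a small batch of new-product images (hypothetical tensors)."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()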

Business Impact: Small data techniques enable organizations to manage production models that
are more resilient and able to adapt to major world events like the pandemic or future disruptions.
These techniques are ideal for AI problems where no big datasets are available. Using smaller amounts of data allows data scientists to use more classical machine learning algorithms that provide good-enough accuracy without the need for big data training sets. It can also speed up the
business exploration and model prototyping for novel solutions, as this approach reduces the time,
compute power, energy and costs to collect, prepare or label large datasets.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Diveplane; Google (Cloud AI); Landing AI; MyDataModels; OWKIN

Recommended Reading: “3 Types of Machine Learning for the Enterprise”

“A Guidance Framework for Operationalizing Machine Learning”

“Boost Your Training Data for Better Machine Learning”

Adaptive ML
Analysis By: Pieter den Hamer; Erick Brethenoux

Definition: Adaptive machine learning (ML) is the capability of frequently retraining ML models
when online in their runtime environment, rather than only training ML models when offline in their
development environment. This capability allows ML applications to adapt more quickly to changing
or new real-world circumstances that were not foreseen or available during development.

Position and Adoption Speed Justification: Adaptive ML gets AI much closer to self-learning, or
at least to more frequent learning, compared with most current AI applications which only use static
ML models that depend on infrequent redeployment of new model updates to improve themselves.
Adaptive ML, also known as continuous learning, is technically challenging for several reasons,
including:

■ User feedback or closed loop data about the quality of the ML output, e.g., prediction errors, is
required to enable reinforcement learning for updating the model parameters while online.
■ Less frequent model updates can already be achieved by the current approach of offline retraining, using the full set of available training data, and periodic model update deployments. With adaptive ML there is no time to fully retrain the model. Instead, the model must be incrementally retrained online, using only new or the most recent data, which requires incremental learning algorithms that differ from offline learning algorithms that typically rely on large batches of (historical) data (a minimal sketch of incremental retraining appears after this list).
■ Adaptive ML must be tuned in terms of weighting new data versus older data that was used for
earlier online or offline training and other challenges such as preventing overfitting and proper
testing and validation, at least periodically.
■ Nontechnical challenges include ethical, societal, reliability, liability, safety and security
concerns that come with self-learning and autonomous systems.
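
As referenced above, the following is a minimal sketch of incremental (online) retraining, assuming Python with scikit-learn and NumPy. The streaming data source is a hypothetical stand-in for a runtime feed; real adaptive ML would add monitoring, weighting of new versus old data and periodic validation.

import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()                 # a linear model that supports incremental learning
classes = np.array([0, 1])              # all classes must be declared on the first call

def stream_of_batches(n_batches=10, batch_size=32, drift=0.05):
    """Hypothetical runtime data source whose pattern drifts over time."""
    rng = np.random.default_rng(1)
    for i in range(n_batches):
        X = rng.normal(size=(batch_size, 3)) + i * drift
        y = (X[:, 0] > i * drift).astype(int)
        yield X, y

first_batch = True
for X_batch, y_batch in stream_of_batches():
    if first_batch:
        model.partial_fit(X_batch, y_batch, classes=classes)  # initial fit
        first_batch = False
    else:
        model.partial_fit(X_batch, y_batch)                   # incremental online update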

User Advice: Organizations should consider the use of adaptive ML for one or more of the following
reasons:

■ The ever increasing complexity, pace and dynamics in the environment, society and business,
require ML models that frequently adapt to changing circumstances and impactful events. This
is most relevant in real-time application areas like continuous intelligence, streaming analytics,
decision automation and augmentation in a myriad of industries and business areas.
■ Adaptive ML is a key enabler of autonomous systems such as self-driving vehicles or smart
robots that should be able to operate in their ever changing contexts.
■ With adaptive ML, models remain accurate longer and suffer less from model drift. Data science
teams can improve their productivity by leveraging adaptive ML to reduce the need for
conventional model monitoring, retraining and redeployment. This will reduce the time needed
for ModelOps/MLOps.

In addition:

■ Adaptive ML should be considered by organizations not to replace but to complement current
ML. Most adaptive ML applications will start out with a model that was first trained offline.
Adaptive ML can be seen as a way to further improve, maintain, contextualize, personalize or
fine-tune the quality of ML models, once online.
■ Adaptive ML can be used to compensate for limited availability of training data or “small data,” which hinders offline (e.g., supervised or reinforcement) learning during development. Adaptive ML may start out with a minimum viable model that was pretrained offline, with the model then incrementally improved during actual online usage.
■ Adaptive ML must be accompanied by model monitoring for accuracy and relevancy and also
by proper risk analysis and risk mitigation activities, if only to frequently monitor the quality and
reliability of adaptive ML applications. Even with adaptive ML, a periodic offline full retraining of
the model may be required, as incremental learning has its limitations.
■ Organizations should actively manage talent, infrastructure and enabling technology that is
specifically required for adaptive ML. For example, adaptive ML is likely to be more demanding
in terms of compute power in runtime environments and will require the development of
knowledge about new (incremental learning) algorithms and tools.

Business Impact: The main impact of adaptive ML is to respond more quickly and effectively to
change, enabling more autonomous systems that are responsive to the dynamics of both gradual
change and massive disruptions. For example, the COVID-19 pandemic has resulted in significant
changes in market circumstances, requiring adaptation of existing ML models to maintain their
accuracy. Adaptive ML is most relevant in areas in which context and conditions, or the behavior or preferences of actors, change frequently. Example application areas include customer churn in highly competitive markets, gaming, organized crime fighting and anti-terrorism, fraud detection, cybersecurity, quality monitoring in manufacturing, virtual personal assistants, (semi)autonomous cars and smart robotics.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Cogitai; Guavus; IBM; Microsoft; Tazi

Recommended Reading: “3 Types of Machine Learning for the Enterprise”

“How to Operationalize Machine Learning and Data Science Projects”

“Machine Learning Training Essentials and Best Practices”

Composite AI
Analysis By: Pieter den Hamer; Erick Brethenoux

Definition: Composite AI refers to the combined application of different AI techniques to improve
the efficiency of learning, to increase the level of “common sense” and ultimately to much more
efficiently solve a wider range of business problems.

Position and Adoption Speed Justification: Composite AI is currently mostly about combining
“connectionist” AI approaches like deep learning, with “symbolic” and other AI approaches like rule-
based reasoning, graph analysis, agent-based modeling or optimization techniques. Composite AI
aims to synergize these approaches, both from a pragmatic engineering perspective (improving the
effectiveness of AI) and from a more profound scientific perspective (progressing our knowledge
about artificial intelligence). The ideas behind composite AI are not new, but are only recently truly
materializing. The goal is to enable AI solutions that require less data and energy to learn and which
embody more “common sense,” thus bringing AI closer to human learning and intelligence. In
addition, composite AI recognizes that neither deep learning nor graph analytics or more “classical”
AI techniques are silver bullets. Each approach has its strengths and weaknesses; none is able to
resolve all possible AI challenges.

User Advice: AI leaders and practitioners should:

■ Identify projects in which a fully data-driven, ML-only approach is unviable, inefficient or ill-
fitted. For example, this is the case when not enough data is available, when training a deep
learning network requires large amounts of data, time and energy, or when the required type of
intelligence is very hard to represent in current artificial neural networks.
■ Leverage domain knowledge and human expertise to provide context to and complement data-
driven insights, by applying decision management with business rules, knowledge graphs or
physical models in conjunction with machine learning models.
■ Combine the power of deep learning in data science, image recognition or natural language
processing with graph analytics to add higher-level, symbolic and relational intelligence (for
example, spatiotemporal, conceptual or common sense reasoning).
■ Extend the skills of data scientists and machine learning experts, or recruit/upskill additional AI
experts, to also cover graph analytics, optimization or other required techniques for composite
AI. In the case of rules and heuristics, skills for knowledge elicitation and knowledge
engineering should also be available.
■ Since composite AI is still emerging, be cautious of the fact that the benefits of composite AI
can only be achieved through the creative artisanship of AI experts, while avoiding the
disadvantages and weaknesses of each underlying AI technique.

Business Impact: Composite AI offers two main benefits in the short term. First, it brings the power
of AI to a broader group of organizations that do not have access to large amounts of historical or
labeled data but do possess significant human expertise. Composite AI is one of the strategies to
deal with “small data.” Second, it helps to expand the scope and quality of AI applications, in the
sense that more types of reasoning challenges and required intelligence can be embedded in
composite AI. Other benefits, depending on the techniques applied, include better interpretability
and the support of augmented intelligence. There are many possible examples:

■ A heuristic or rule approach can work together with a deep learning network in AI for predictive maintenance. Rules coming from human engineering experts, or the application of physical/engineering model analysis, may specify that certain sensor readings are likely to indicate inefficient asset operations, which can then be used as a feature to train a neural network to assess and predict asset health. Typically, such a combination is much more effective than relying only on heuristics or only on a fully data-driven approach (a minimal sketch of this pattern appears after this list).
■ In computer vision, (deep) neural networks are used to identify or categorize people or objects
in an image. This output can then be used to enrich or generate a graph, which represents the
image entities and their (spatiotemporal) relationships. This enables answering questions like
“which object is in front of another,” “what is the speed of an object” and so on. Using a
connectionist approach only, such seemingly simple questions are extremely hard to answer.
■ In supply chain management, a composite AI solution can be composed of multiple agents,
with each agent representing an actor in the ecosystem, typically having its own intelligence to
monitor local conditions and machine learning to make predictions. Combining these agents
into a “swarm” enables the creation of a common situation awareness, more global planning
optimization and more dynamic, responsive scheduling.
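
As referenced in the predictive maintenance example above, the following is a minimal sketch of combining a human-authored rule with a data-driven model, assuming Python with scikit-learn and NumPy. The sensor readings, thresholds and failure labels are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def expert_rule(temperature_c, vibration_mm_s):
    """Hypothetical rule from engineering experts: high temperature combined
    with high vibration indicates inefficient asset operation."""
    return int(temperature_c > 80 and vibration_mm_s > 7.0)

# Hypothetical sensor history and 30-day failure labels.
rng = np.random.default_rng(2)
temperature = rng.uniform(40, 100, size=200)
vibration = rng.uniform(0, 12, size=200)
failed_within_30_days = (temperature + 5 * vibration + rng.normal(0, 10, 200)) > 120

# The symbolic rule becomes one more feature alongside the raw readings.
rule_flag = np.array([expert_rule(t, v) for t, v in zip(temperature, vibration)])
X = np.column_stack([temperature, vibration, rule_flag])
model = RandomForestClassifier(random_state=0).fit(X, failed_within_30_days)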

In the longer term, composite AI has the potential to pave the way for more generic and intelligent
AI solutions with profound impact on business models, although still a far cry from the elusive
artificial general intelligence.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: ACTICO; Beyond Limits; BlackSwan Technologies; Cognite; Exponential AI;
FICO; IBM; Indico; Petuum; ReactiveCore

Recommended Reading: “How to Use Machine Learning, Business Rules and Optimization in
Decision Management”

“Combine Predictive and Prescriptive Analytics to Drive High-Impact Decisions”

“Leverage Augmented Intelligence to Win With AI”

Generative AI
Analysis By: Svetlana Sicular; Avivah Litan; Brian Burke

Definition: Generative AI is a variety of ML methods that learn a representation of artifacts from the
data, and use it to generate brand-new, completely original, realistic artifacts that preserve a
likeness to the training data, but do not repeat it. Generative AI can produce novel content (images,
video, music, speech, text — even in combination), improve or alter existing content and create new
data elements.

Position and Adoption Speed Justification: The hype around generative AI is heating up due to its sensational successes and huge societal concerns. According to Adweek, patent filings for generative AI grew 500% in 2019. Christie’s auction house already sells AI-generated artwork. More practical applications, like differential privacy and synthetic data, are increasingly drawing enterprises’ attention.

AI methods that directly extract numeric or categorical insights from data are relatively widespread.
Generative AI, which creates original artifacts or reconstructed content and data, is the next frontier.
So far, it is less ubiquitous and has fewer use cases. The hype around generative AI is growing due to recent notable progress in Generative Adversarial Networks (GANs), invented in 2014, and language generation models, such as Bidirectional Encoder Representations from Transformers (BERT), introduced in 2018, and Generative Pre-trained Transformer 2 (GPT-2), introduced in 2019. Other quickly progressing generative AI methods include self-supervised learning, variational autoencoders and autoregressive models.
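
As a small illustration of language-model-based generation, the sketch below uses GPT-2 through the open-source Hugging Face transformers library; this choice is an assumption for illustration, not a reference to any vendor listed here, and it requires the pretrained model download to be available.

from transformers import pipeline

# Load a small pretrained generative language model.
generator = pipeline("text-generation", model="gpt2")

# Generate two short marketing-style drafts from a hypothetical prompt.
drafts = generator(
    "Our new running shoe is designed for",
    max_length=40,
    num_return_sequences=2,
)
for draft in drafts:
    print(draft["generated_text"])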

Regrettably, generative AI technologies underpin “deep fakes,” content that is dangerous in politics,
business and society. Prominent organizations, such as Partnership on AI and DARPA, are pursuing
detection of “deep fakes” to counteract fraud, disinformation, instigation of social unrest and other
negative impacts of generative AI. In 2020, “deep fakes” are not yet pervasive among the fake
content and news spread across the web, but Gartner expects this to rapidly change in the next five
years.

User Advice: Data and analytics leaders should evaluate generative AI for the following purposes:

■ Creative AI, a large subcategory of generative AI to produce art and work that typically requires
imagination, for example, Adobe Sensei for visual arts and OpenAI Jukebox for music.
■ Content creation, such as text, images, video and sound. Content creation already penetrates
marketing, for example, producing personalized copywriting. Twenty-nine percent of marketing
leaders rank generative content creation among the top three, according to the 2019 Gartner
Marketing Technology Survey.
■ Content improvements, such as rewriting outdated text, background noise cancellation, increasing image resolution, and modifying photos by adjusting, removing or adding artifacts.
■ Data creation, often known as synthetic data, to mitigate data scarcity or privacy barriers to insight. Generative techniques create new data instances, so the generated data repeats patterns of the actual data but is completely made up. Examples include text generation for chatbots, image generation for quality analysis in manufacturing and differential privacy. Visma, for example, generated a synthetic version of the entire population of Norway for the Norwegian Labour and Welfare Administration, preserving demographic nuances.
■ Industry applications in retail, healthcare, life sciences, telecommunications, media, education
and HCM. For example, in healthcare, generative AI could create medical images that depict the
future development of a disease. In consumer goods, it can generate catalogs. In e-commerce,
it can help customers to “try-on” various makeups and outfits.

■ Gartner recommends that software companies that produce generative AI include methods to
preclude their software from being used to generate fake content before releasing the software,
delivering the antidote immediately in version 1.0.

Organizations must prepare to mitigate the impact of deep fakes, which can cause serious
disinformation and reputational risk. There are several methods evolving to do this including
algorithmic detection and tracing content provenance.

Business Impact: More use cases will surface and proliferate. The field of generative AI will progress rapidly in both scientific discovery and technology commercialization. Reproducibility of AI results will be challenging in the near term. Other technologies, especially those that provide trust and transparency, could become an important complement to generative AI solutions.

Full and accurate detection of generated content will remain challenging for years and may not be
completely possible. To do so will require elevating critical thinking as a discipline in the
organization. Technical, institutional and political interventions combined will be necessary to fight
deep fakes. We will see unusual collaborations, even among competitors, to solve the problem of
deep fakes and other ethical issues rooted in generative capabilities of AI.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Adobe (Sensei); Bitext; Dessa; Google (DeepMind); Landing AI; LeapYear;
OpenAI; Phrasee; Spectrm; Textio

Recommended Reading: “Innovation Tech Insight for Deep Learning”

“How to Benefit From Creative AI — Assisted and Generative Content Creation”

“Cool Vendors in AI Core Technologies”

“Cool Vendors in Speech and Natural Language”

“Cool Vendors in Natural Language Technology”

Packaged Business Capabilities
Analysis By: Yefim Natis

Definition: Packaged business capabilities (PBCs) are encapsulated software components that represent a well-defined business capability, recognizable as such by a business user. They inherit some characteristics from both microservices (encapsulation and domain-driven design) and monolithic applications (self-contained and delivering clear and complete business value), but are more business-oriented than the former and more adaptive than the latter. Complete vendor applications may be delivered as assemblies of PBCs.

Position and Adoption Speed Justification: PBCs are a foundational technology resource of the
composable enterprise (see “Innovation Insight for Packaged Business Capabilities”). They act as
the building blocks for rapid composition and recomposition of application experiences. When combined with democratized application composition tools, they empower application innovation by multidisciplinary fusion teams, IT professionals and business technologists (see “2020 Strategic Roadmap for the Future of Applications”). Fully expressed PBCs encapsulate a business entity
(e.g., a bank account) and are exclusive owners of the entity’s data. They provide the complete set
of APIs and event channels to facilitate the entity’s entire life cycle (e.g., open, close, deposit,
withdrawal, lookup and all other applicable bank account actions). Basic PBCs may represent a
single atomic business function (e.g., bank account deposit), therefore having limited autonomy.
Data and analytics PBCs deliver reference information and researched insights, respectively.
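
To make the bank account example concrete, the following is a minimal sketch of a PBC-style component, assuming Python with FastAPI and an in-memory store; the endpoints, names and single-process design are illustrative, and a production PBC would also publish event channels and persist its data.

from uuid import uuid4
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Bank Account PBC")
_accounts = {}  # the PBC is the exclusive owner of the entity's data

@app.post("/accounts")
def open_account():
    account_id = str(uuid4())
    _accounts[account_id] = {"balance": 0.0, "status": "open"}
    return {"account_id": account_id}

@app.post("/accounts/{account_id}/deposit")
def deposit(account_id: str, amount: float):
    account = _accounts.get(account_id)
    if account is None or account["status"] != "open":
        raise HTTPException(status_code=404, detail="Account not found or closed")
    account["balance"] += amount
    return account

@app.get("/accounts/{account_id}")
def lookup(account_id: str):
    if account_id not in _accounts:
        raise HTTPException(status_code=404, detail="Account not found")
    return _accounts[account_id]

@app.post("/accounts/{account_id}/close")
def close_account(account_id: str):
    if account_id not in _accounts:
        raise HTTPException(status_code=404, detail="Account not found")
    _accounts[account_id]["status"] = "closed"
    return _accounts[account_id]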

The full fruition of the composable enterprise model comes when both PBCs and democratized
composition tools become widely available. Today, there are already multiple precursors to both
PBCs and composition tools, supporting partial implementation of composable enterprise. Visionary
application vendors, sensing customers’ demand for greater self-expression in application
experiences, are evolving through API catalogs to PBC renditions of their application services.
Today’s PBC precursors include API-centric (“headless”) SaaS (e.g., Twilio), API Products and
marketplaces (e.g., RapidAPI), banking services (e.g., Solaris) or API aggregators (e.g., Plaid),
prebuilt integrations (e.g., Cloud Elements), Business “microservices” (e.g., Finastra APIs) and
business APIs (e.g., SAP Business API Hub). The composition platform precursors include the low-
code application platforms (e.g., Mendix), business process management suites (e.g., Appian) and
integration PaaS (e.g., Dell Boomi).

As the COVID-19 pandemic disruption forces organizations to increase their resilience, many turn to
the model of composable enterprise to drive agility, efficiency, scalability and democratization into
their application environment. To progress in that direction, organizations prioritize business modularity of vendor applications and begin to manage their API and low-code resources as strategic investments, which is pushing the notion of PBCs toward the Peak of Inflated Expectations.

User Advice: Application leaders, in collaboration with CIOs, responsible for strategic business
change in their organizations should:

■ Prioritize mastery in API management, integration, business-IT collaboration and democratized tooling to achieve preparedness for operating a composable enterprise experience.
■ Reject any new monolithic solutions proposed by vendors or in-house developers, and plan to
renovate or replace the old ones to begin to move to composable application experiences.
■ Accelerate product-style delivery of application capabilities packaged as building blocks for
application assembly, using agile and DevOps techniques over traditional methods.
■ Build a technology portfolio of democratized tool capabilities in support of development,
integration/assembly and governance of composed application experiences.
■ Give preference to visionary application vendors that anticipate the architecture of composable
enterprise and deliver applications, ready for customers’ subset/superset recompositions.

■ Transform the culture of the IT organization from its nearly exclusive focus on strategic software
development to the role of partner and source of strategic guidance, support, service and some
software development for the business-led technology innovation.

Business Impact: Adoption of PBCs enables operation of the composable enterprise, which in turn
delivers resilience, efficiency, agility and democratization to business. But even alone, without the
other key components of the future of applications (fusion teams and democratized technology),
transition from the constraints of monolithic applications or fragmentation of technical APIs to the
granularity of business-defined composable components advances the ability of organizations to
innovate faster, safer and smarter.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: commercetools; Contentful; Elastic Path; finreach solutions; Finastra; Mambu;
Plaid; SAP; Stripe; Twilio

Recommended Reading: “Application Leaders: Master Composable Enterprise Thinking for Your
Post-COVID-19 Reset”

“2020 Strategic Roadmap for the Future of Applications”

“Innovation Insight for Packaged Business Capabilities and Their Role in the Future Composable
Enterprise”

“Future of Applications: Delivering the Composable Enterprise”

“Top 10 Trends in PaaS and Platform Innovation, 2020”

“Predicts 2020: Application Leaders”

“The Applications of the Future Will Be Founded on Democratized, Self-Service Integration”

“Apply the Principles Behind the Future of Applications to Digital Commerce”

Citizen Twin
Analysis By: Alfonso Velosa; Marty Resnick

Definition: A digital twin of a citizen is a virtual representation of an individual. Governments use citizen twins to support new or enhanced citizen services or government missions such as pandemic or safety management. The citizen twin has a model, data, a unique one-to-one association and monitorability. It integrates data into the twin from siloed public and commercial sources such as health records, social media, phone location logs, and physical infrastructure such as cameras and wearables.

Position and Adoption Speed Justification: Governments are increasingly developing digital twin
models of citizens to monitor and help address health, safety, travel, membership, and social media
impacts on society. The citizen twin can be used to build profiles, personas, and scores helping
stakeholders make decisions, such as aligning medical treatment, managing transportation
resources, or taking sensor data to try to understand the health of passengers arriving on an
airplane. Aggregated versions of the anonymized citizen twin will be used to understand broader
societal patterns, drive government resource allocation and utilization, and impact societal behavior.

Precursors already exist. In western countries, financial organizations provide citizens with credit
rating scores. Retailers model shoppers. China has a citizen social credit system. A variety of airport
and retail vendors are developing passenger and shopper tracking solutions.

User Advice: CIOs need to help their governments or enterprises take advantage of this emerging trend to serve citizens and customers better. At the same time, CIOs must protect their citizens, governments and enterprises from misuse of citizen data. Key steps include:

■ Transparently develop robust privacy and digital ethics policies.
■ Establish clear benefits to citizens, such as certifying that children in a classroom are all healthy or simplifying medical triage to get a citizen to medical care.
■ Develop sensor and IoT monitoring capability.
■ Invest in integration skills to connect into a diverse set of data sources.
■ Use AI to build and test the usefulness of a variety of citizen-twin-based scores.

Business Impact: Governments’ safety initiatives will increasingly aggregate citizen data across the
world, as they seek to serve citizens, to protect them from pandemics or other crises. This will have
a range of key impacts, including:

■ Increased debates over privacy and the merits of government access to citizen data, although this debate has been hampered by politicization in a variety of Western countries.
■ Scope creep as government bureaucracies increase the types and quantity of data collection.
■ Government curation of aggregated citizen data becoming a security risk for government data and possibly a safety risk for the individual citizen.
■ Increased regulation to balance government use of the data with citizens’ respective rights to privacy.
■ As governments work to collect more data on citizens, this may drive a dialogue about returning more services and other financial benefits to citizens, but it will also expose a lack of integration skills across data sources — and political infighting over data silos.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Alibaba Cloud; Apple; Google; Tencent; VANTIQ

Recommended Reading: “Getting Started With a Digital Twin of Government”

“Top 10 Plausible Directions Resulting from COVID-19”

Digital Twin of the Person
Analysis By: Marty Resnick; Alfonso Velosa

Definition: A digital twin of the person (DToP) not only mirrors a unique individual but is also a near-
real-time synchronized, multipresence of the individual in both digital and physical spaces. This
digital instantiation (or multiple instantiations) of a physical individual continuously intertwines,
updates, mediates, influences, and represents the person in multiple scenarios, experiences,
circumstances, and personas.

Position and Adoption Speed Justification: A simple DToP is already being used for medical and
biotech use cases. For example, analyzing healthcare plans, preventative care, wellness and
disease control uses a rudimentary DToP to predict future medical costs. Furthermore, the Citizen
Digital Twin (a “social” subset of a DToP) is being used to help address health, safety, travel,
membership, and social media impacts on society. The impact of DToPs will continue to grow in
areas such as education, remote working, consumer shopping, gaming and social media.

User Advice: The “avatar” has often been considered a digital representation of someone in various situations; however, the avatar is just a visualization or digital rendition of the person and is not typically synchronized to the physical person it is linked to. What really makes a DToP different is the role of the twin as a near-real-time proxy for the state or characterization of the physical twin, and the various levels of data fidelity that make this representation effective in achieving a particular outcome. Outcomes range from monitoring a potentially hazardous declining health condition to detecting aberrant social behavior or ensuring safety in hazardous working conditions.

High fidelity situations (high level of data, high visualization) would include the ability to be
represented in the following situations:

■ Social experience
■ Business meeting
■ Consumer shopping
■ Gaming

Lower fidelity (high level of data, low visualization):

■ Medical
■ Safety

■ Healthcare
■ Consumer 360
■ Human resources

Enterprises should begin to adopt the concept of DToP to facilitate more collaborative and engaging remote working situations, to understand and predict customer demands, and to accelerate new business models reliant on digital representations of people. Enterprises must develop strong digital ethics, security and data governance policies to protect customer, employee and citizen privacy and data, while meeting legal and other compliance requirements.

Business Impact: Digital twin of the person opens up new and emerging business models but also opens the door to additional security, privacy and ethical considerations. Currently, precursors
or early versions of digital twin of the person are used for medical, e-commerce and social
monitoring. But as the concept expands, new citizen services, medical care and sales options will
bring in a flood of experimentation by governments and commercial entities. Effective data-driven
decision making and testing out of various scenarios will be possible with less risk and in a much
more efficient way. New ways to serve citizens, patients, or shoppers will be enabled by real time
understanding of their situation. In parallel, enterprises with poor security and digital ethics policies
expose themselves to significant legal and regulatory risk.

For some enterprises, the critical link will be the connection between an asset and a person. The
digital twin of the asset (e.g., a smart meter) will be connected with the digital twin of the person
(e.g., a residential consumer) and may drive opportunities for serving the customer while driving
cost and process optimization and new revenue.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Amazon; Apple; Insight Enterprises; NTT; Philips Healthcare; ScaleOut Software;
Sim&Cure; Tencent

Recommended Reading: “Cool Vendors in Augmenting Human Experiences”

“Maverick* Research: Being Human 2040 — The Life of the Architected Human in a More-Than-
Human World”

Multiexperience
Analysis By: Jason Wong

Definition: Multiexperience describes the interactions across a variety of digital touchpoints (e.g.,
web, mobile apps, chatbots, AR/VR, wearables), using a combination of interaction modalities (e.g.,
no-touch, voice, vision, gesture) in support of seamless and consistent digital user journeys.

Multiexperience is part of a long-term shift from computers as individual devices we use to a
multidevice, multisensory and multilocation environment we experience.

Position and Adoption Speed Justification: Through 2030, the user experience (UX) will undergo
a significant shift in terms of how users experience the digital world. Web and mobile apps are already commonplace, but they are undergoing UX changes driven by new capabilities like
progressive web apps, WebXR and AI services. Conversational platforms allow people to interact
more naturally and effortlessly with the digital world. Virtual reality (VR), augmented reality (AR) and
mixed reality (MR) are changing the way people perceive the digital world. This combined shift in
both perception and interaction models leads to the future multisensory, multidevice and
multitouchpoint experience. Having the ability to communicate with users across many human
senses will provide a richer environment for delivering nuanced information.

The long-term manifestation of multiexperience (MX) is a unified digital experience that is seamless,
collaborative, consistent, personalized and ambient. This will happen over the next five years — and
is already being accelerated by the COVID-19 pandemic, which has increased reliance on digital touchpoints. Privacy concerns, in particular, may dampen the enthusiasm and impact of adoption.
On the technical front, the long life cycles of many consumer devices and the complexity of having
many creators developing elements independently, will be enormous barriers to seamless
integration. Don’t expect automatic plug and play of off-the-shelf devices, applications and
services. Instead, proprietary ecosystems of devices will exist in the near term. Focus on
understanding how unified digital experiences impact the business and use evolving
multiexperience technologies to create targeted solutions for customers or internal constituencies.

User Advice: Application leaders should:

■ Identify three to five high-value proof-of-concept projects in which multiexperience design can
lead to more compelling and transformative experiences.
■ Use personas and journey mapping to address the requirements of diverse enterprise use
cases, including external-facing and internal-facing scenarios to support a unified digital
experience.
■ Collaborate with marketing/branding to educate the UX team on the brand strategy and identity;
ensure UX teams accurately apply visual, behavioral and written guidelines across all relevant
multiexperience touchpoints and modalities.
■ Establish a multidisciplinary core team potentially including but not limited to IT, business
leadership, HR, facilities management, UX, experience design and product.

Business Impact: Organizations are shifting their delivery models from projects to products, but
beyond products is the experience — the collection of feelings, emotions and memories.
Understanding and exploiting multiexperience is essential to the effectiveness of customer
experience (CX), employee experience (EX) and UX strategies. Multiexperience starts with a mindset
to remove friction and effort for the users — internal or external — through the contextual use of
digital technologies. Adopting this mentality will allow application leaders to better align with
business objectives and be more agile at delivering positive business outcomes. When CX, EX, UX and MX strategies are executed with one another in harmony and synchronicity, you can deliver
transformative and memorable experiences for customers, employees and all users of your digital
products and services.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Recommended Reading: “Top 10 Strategic Technology Trends for 2020: Multiexperience”

“Transcend Omnichannel Thinking and Embrace Multiexperience for Improved CX”

“Build Links Between Customer Experience, Multiexperience, User Experience and Employee
Experience”

“Success in the Digital Experience Economy Requires Connecting MX, UX, CX and EX”

Responsible AI
Analysis By: Svetlana Sicular

Definition: Responsible AI is an umbrella term for many aspects of making the right business and
ethical choices when adopting AI that organizations often address independently. These include
business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability,
accountability, safety, privacy and regulatory compliance. Responsible AI operationalizes an
organizational responsibility and practices that ensure positive and accountable AI development and
exploitation.

Position and Adoption Speed Justification: Responsible AI signifies the move from declarations
and principles to operationalization of AI accountability at the individual, organizational and societal
levels. While AI governance is practiced by designated groups, responsible AI applies to everyone
who is involved in the AI process. Organizations are increasing their AI maturity, which requires
defined methods and roles that operationalize AI principles. Lately, responsible AI has been elevated
to the highest organization levels by Accenture, Google, Microsoft, OpenAI, PwC, Government of
Canada, Government of India, the World Economic Forum (WEF) and more. Although responsible AI
is nascent in industries, pioneers include AXA, Bank of America, State Farm, Telefónica and Telus.

The COVID-19 pandemic stressed the need for responsible AI, as governments and the entire world followed AI models projecting the pandemic’s course and economies’ reopening. Many AI vendors and individual data scientists immediately shifted to solving pandemic problems, where they had to balance vital deliverables against risks associated with privacy, ethics, abrupt data changes and unconfirmed facts. Using AI for virus tracking and for monitoring mask distribution and social distancing is a subject of public debate regarding appropriate AI interpretation, transparent data handling and clear exit plans for such temporary measures (see “How to Use AI to Fight COVID-19 and Beyond”).

User Advice: Data and analytics leaders, take responsibility — it’s not AI, it’s you who are liable for the results and impacts, either intended or unintended. Extend existing mechanisms, like data and analytics governance and risk management, to AI to:

■ Establish and refine processes for handling AI-related business decisions.
■ Designate, for each use case, a champion accountable for the responsible development of AI.
■ Establish processes for AI review and validation. Have everyone in the process defend their decisions in front of their peers and validators.
■ Provide guidelines to assess how much risk is appropriate.
■ Ensure that humans are in the loop to mitigate AI deficiencies.

Build bridges to those organizational functions that are vital to AI success, but poorly educated
about AI value and dangers to:

■ Open a conversation with security, legal and customer experience functions.
■ Build an AI oversight committee of independent, respected people.
■ Continuously raise awareness of AI differences from the familiar concepts. Provide training and
education on responsible AI, first to most critical personnel, and then to your entire AI audience.
■ Have an escalation procedure early on in case something goes wrong.
■ Anticipate human problems with AI: Identify enthusiasts who can help establish ongoing
education about responsible AI.

The biggest problem in AI adoption currently is mistrust in AI solutions and low confidence in AI’s
positive impact. Responsible AI helps organizations go beyond purely technical AI progress to more
successfully balance risk and value. With AI maturity, you will learn a lot and will make fewer
mistakes — remain humble and keep learning.

Business Impact: Societal impacts of AI are frequently depicted in a distorted way, either too
optimistically or as doom and gloom, while the responsible AI approach helps get a realistic view
and instills trust. AI, like no other technology, encompasses organizational and societal dangers that
have to be mitigated by responsible AI development and handling:

■ The way AI is developed will encompass the mandatory awareness and actions regarding all
aspects of responsible AI. Gartner predicts, “By 2023, all personnel hired for AI development
and training work will have to demonstrate expertise in responsible development of AI.”
■ New roles, from independent AI validator to chief responsible AI officer, are necessary and
are already being created to operationalize responsible AI at the organizational and societal
levels.
■ Responsible AI paves the way for new business models for the creation of products, services or
channels. It forms new ways of doing business that will result in significant shifts in market or
industry dynamics via confirmed responsible AI actions and protocols; for example, a cross-
organizational effort to fight “deep fakes.”

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Recommended Reading: “Predicts 2020: AI and the Future of Work”

“AI Ethics: Use 5 Common Guidelines as Your Starting Point”

“Data Ethics and COVID-19: Making the Right Decisions for Data Collection, Use and Sharing”

“Top 10 Strategic Technology Trends for 2020: A Gartner Trend Insight Report”

AI-Augmented Development
Analysis By: Arun Batchu

Definition: AI-augmented development (AIAD) is the use of AI technologies such as machine
learning (ML), natural language processing (NLP) and similar technologies to aid application
development teams in creating and delivering applications faster, more consistently, and with higher
quality.

Position and Adoption Speed Justification: Application development is part science, engineering
and craft. The expanding diversity and complexity of building software for the digital business puts
a premium on business outcomes and continuous value delivery. However, reliance on human
expertise creates an upper limit on how fast we can design, create and test new software.
Handcrafted software, just like any other handicraft, is inconsistent, which can be a problem in
creating mission-critical systems that cannot fail. Today’s application development methods involve
slow, repetitive and mundane tasks that sap developers’ creativity and drain their productivity.
Additionally, it takes a long time for a novice programmer to become a master engineer, further
exacerbating the shortage of critical application development skills.

AIAD attempts to resolve these issues by augmenting development teams’ capabilities, acting as a
virtual co-developer, an expert coach and a quality control inspector.

In 2019, two key AI technologies tag-teamed to dramatically improve the quality of such AI-
augmented software development: deep learning (a special type of ML) and NLP. By treating
millions of lines of high-quality open-source code as data, and leveraging the ubiquitous
availability of high-performance computing power, AI researchers and startups have demonstrated
remarkable AI developer “co-pilots.” These co-pilots can instantly predict entire lines of
code, detect quality problems (such as insecure code) and even fix them.
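
To illustrate the underlying mechanism (a language model trained on code predicting the next tokens), the following sketch uses the open-source Hugging Face transformers library with a small generic model as a stand-in; the model choice and prompt are illustrative assumptions, not any vendor’s co-pilot:

```python
# Illustrative sketch of next-token code suggestion with a generic causal
# language model. Real co-pilots use far larger models trained on code.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # stand-in model

prompt = "def is_palindrome(s):\n    s = s.lower()\n    return "
suggestions = generator(prompt, max_new_tokens=12, num_return_sequences=3, do_sample=True)

for i, s in enumerate(suggestions, 1):
    completion = s["generated_text"][len(prompt):]
    print(f"Suggestion {i}: {completion!r}")
```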

While AI is revolutionizing “high-control” software application development, a similar
metamorphosis is happening in “low-code” application development. Several vendors featured in
our Low Code Application Platform MQ and Multiexperience Development Platform MQ are
aggressively investing in AI-augmented capabilities. These capabilities include machine-learning-
driven recommendations that generate next best actions (such as workflows), AI coaches that teach
novices and application development virtual assistants.

Yet these technologies, while highly promising, are in their infancy. We don’t know enough about
their reliability, stability, scalability and generality. We don’t know if and how customizable the
models they generate can be. We don’t understand their failure modes completely. The impressive
models these technologies generate are opaque. Will we trust them without transparency? How do
we know they have not been tampered with, without provenance to prove their authenticity? How
do we know that the code they generate is not copyrighted or malicious? Indeed, AI researchers are
actively working on improving the technologies to resolve these and other issues.

Despite these challenges, early adopters could gain significant competitive advantage by embracing
these innovations today.

User Advice: Application leaders responsible for application development teams should:

■ Encourage their teams to experiment with these tools today and adopt them when there is a
good fit.
■ Monitor how AI is transforming software development roles and prepare a learning and
development plan for your team accordingly.
■ If not already familiar, encourage your teams to learn how machine learning and other AI
technologies work, the challenges that come with them and how to mitigate them.
■ Engage with augmentation tool vendors to improve and co-develop useful features and
capabilities.

Business Impact: Unlike previous AI technologies that were brittle and static, today’s AI
technologies are general purpose technologies (GPT) and adaptive. Adaptive GPTs are
transformative, just like steam and electric technologies were in their era. Unlike steam and electric
technologies, today’s AI technologies increase in their capabilities proportional to the amount of
data and computing capacity available to them. Propelled by the rapid growth of software code, the
data generated by digital applications and cloud computing, these AI machines will gain capabilities
that will transform the software development life cycle in the next three to five years. We expect the
technology to pass through three stages. The first and current stage is where AI is able to help as an
apprentice, suggesting code fragments. The next stage is where the AI becomes smart enough to act
as a peer to the developer. The third stage is the lead-expert stage, where the AI writes most of the
code, with the developer tweaking as necessary.

This wave could reach tidal proportions or dissipate, like any emerging technology wave. You must
plan for it now, for failing to plan might mean planning to fail.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Codota; DeepCode; Google; Kite; Mendix; Microsoft; OutSystems; Parasoft

Recommended Reading: “Innovation Insight for AI-Augmented Development”

“Top 10 Strategic Technology Trends for 2019: AI-Driven Development”

Composable Enterprise
Analysis By: Yefim Natis; Dennis Gaughan; Gene Alvarez

Definition: A composable enterprise designs its business models, technology architecture,
organization and partnership ecosystems in a modular manner, so that it can safely and rapidly
change (recompose) at any moment of need. The composable enterprise imposes a model of
application design in which applications are experiences assembled by or for users, with
vendor-provided and custom packaged business capabilities as the building blocks.

Position and Adoption Speed Justification: The core principles of the composable enterprise —
modularity, efficiency, continuous improvement and adaptive innovation — are familiar to most
organizations. Most organizations have been investing in improving their operations along each of these
parameters, with some success, but without a cohesive experience of broad change. The composable
enterprise model brings these core characteristics together and applies them equally to the
management of business models, organizational structures, ecosystem strategies, employees’ ways of
working and technology investments. The challenge to achieving consistent benefits of the
composable enterprise across the organization is not any one particular investment, but the
essential underlying requirement for the pervasive practice of “composable enterprise thinking.”
This fundamentally cultural change — from the rigidity of familiar enterprise structures to the
elasticity of active, continuous change — is the most significant barrier to achieving the benefits of
the composable enterprise.

The sudden disruption of the COVID-19 pandemic has woken up the leadership of every business
to the existentially critical importance of business resilience. In this context, business leaders and
technology vendors all are prepared to make strategic and radical changes to their operations,
practices, policies and cultural postures to become better prepared for the new and next business
disruptions. This strategic imperative builds a momentum for steady but fast adoption of the core
principles of composable enterprise, pushing it toward the Peak of Inflated Expectations and on to
the Plateau of Productivity.

User Advice: Application leaders, guiding their organizations in the process of digital
transformation, should:

■ Use composable enterprise thinking to innovate faster and safer, to reduce costs, and to lay the
foundation for business-IT partnerships.
■ Prioritize formation of business-IT fusion teams to facilitate faster, smarter and safer decisions
in navigating the business through current and future disruptions.

■ Assemble a democratized technology platform to best support the operation of fusion teams by
combining low-code composition/development tools with the traditional code-centric
integration/development technology.

Business Impact: Organizations that adopt the composable enterprise model in their business,
technology and culture achieve a new level of resilience and transformative access to innovation.
They move from the rigid and inefficient traditional normal of hierarchical thinking to the active
agility of composable experience. Such an organization assembles (integrates) its application
experiences from internal and external ecosystems of components (packaged business capabilities)
to actively track and support the specific (and changing) requirements of its users.
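
A deliberately simplified sketch of this assembly idea follows, composing a user-facing experience from two hypothetical packaged business capabilities exposed as APIs; the endpoints and fields are invented for illustration:

```python
# Simplified sketch: assembling an experience from two hypothetical packaged
# business capabilities (PBCs) exposed as REST APIs. Endpoints and fields are
# invented for illustration, not real services.
import requests

CUSTOMER_PBC = "https://api.example.com/customer-profile"  # hypothetical PBC
ORDER_PBC = "https://api.example.com/order-management"     # hypothetical PBC

def order_status_experience(customer_id: str) -> dict:
    """Compose a user-facing view from independently replaceable capabilities."""
    customer = requests.get(f"{CUSTOMER_PBC}/customers/{customer_id}", timeout=5).json()
    orders = requests.get(f"{ORDER_PBC}/orders", params={"customerId": customer_id},
                          timeout=5).json()
    # Either PBC can be swapped or recomposed without rewriting the experience.
    return {
        "greeting": f"Hello, {customer.get('name', 'customer')}",
        "open_orders": [o for o in orders if o.get("status") != "delivered"],
    }

if __name__ == "__main__":
    print(order_status_experience("42"))
```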

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Recommended Reading: “Future of Applications: Delivering the Composable Enterprise”

“Application Leaders: Master Composable Enterprise Thinking for Your Post-COVID-19 Reset”

“2020 Strategic Roadmap for the Future of Applications”

“Innovation Insight for Packaged Business Capabilities and Their Role in the Future Composable
Enterprise”

“Top 10 Trends in PaaS and Platform Innovation, 2020”

“Predicts 2020: Application Leaders”

“The Applications of the Future Will Be Founded on Democratized, Self-Service Integration”

“Apply the Principles Behind the Future of Applications to Digital Commerce”

At the Peak

Data Fabric
Analysis By: Ehtisham Zaidi; Robert Thanaraj; Mark Beyer

Definition: A data fabric is an emerging data management design concept for attaining flexible,
reusable and augmented data integration pipelines, services and semantics, in support of various
operational and analytics use cases delivered across multiple deployment and orchestration
platforms. Data fabrics support a combination of different data integration styles and utilize active
metadata, knowledge graphs, semantics and ML to augment data integration design and delivery.

Position and Adoption Speed Justification: The data fabric — as a data management design
concept — is a direct response to long-standing issues now being aggravated by digital
transformation. These include the multiplicity of data sources and types, the soaring data volume,
the increasing complexity of data integration and the rising demand for real-time insights. Simply
put, a data fabric is a design that leverages existing tools and platforms and adds metadata sharing,
metadata analysis and metadata-enabled self-healing along with orchestration and administration
tools to manage the environment. As a data fabric becomes increasingly dynamic, it evolves to
support automated data integration delivery. Data fabrics are almost at the Peak of Inflated
Expectations due to the hype in the market and the inherent confusion about how to deliver them. A
data fabric is not in itself a tool/platform that can be purchased — it is a design concept that
requires a combination of tools, processes and skill sets to deliver. Yet we see various tools
developed and sold under the data fabric tag that do not provide all the capabilities
needed to fulfill a data fabric, not least the ability to integrate existing data integration technologies
to deliver a dynamic data integration design that uses active metadata to auto-adjust to
new use-case requirements.

Data fabrics will, at the very least, need to collect all forms of metadata (not just technical metadata)
and then perform machine learning over this metadata to provide recommendations for integration
design and delivery. This capability is typically achieved through the augmented data catalog
capabilities of a data fabric. Advanced data fabrics can assist with graph data modeling (useful for
preserving the context of the data along with its complex relationships) and allow the business to
enrich the models with agreed-upon semantics. Some data
fabrics come embedded with capabilities to create knowledge graphs of linked data and use ML
algorithms to provide actionable recommendations and insights to developers and consumers of
data. Finally, data fabrics provide capabilities to deliver integrated data through flexible data delivery
styles such as data virtualization and/or a combination of APIs and microservices (and not just ETL).
These are capabilities that together make up a data fabric and will mature over time as more
vendors move away from point-to-point and static data integration designs and adopt more
dynamic data fabrics.
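
To make the combination of metadata, knowledge graphs and automation more concrete, here is a minimal sketch that models a few metadata relationships as a graph and uses it for impact analysis; it assumes the open-source networkx package, and the datasets and relationships are invented:

```python
# Minimal sketch of a metadata knowledge graph for a data fabric. The assets,
# relationships and statistics are invented; a real fabric harvests this
# metadata from catalogs, pipelines and query logs automatically.
import networkx as nx

g = nx.DiGraph()

# Technical metadata: datasets and the pipelines that connect them.
g.add_edge("crm.customers", "staging.customers", relation="feeds")
g.add_edge("staging.customers", "warehouse.dim_customer", relation="feeds")

# Business metadata: a term linked to a physical asset.
g.add_node("Customer", kind="business_term")
g.add_edge("Customer", "warehouse.dim_customer", relation="describes")

# Operational ("active") metadata that could drive automated recommendations.
g.nodes["warehouse.dim_customer"]["queries_last_30d"] = 1250

def downstream_impact(asset: str) -> list:
    """Which assets are affected if `asset` changes?"""
    return sorted(nx.descendants(g, asset))

print(downstream_impact("crm.customers"))
# A fabric could combine such lineage with usage statistics to recommend, for
# example, materializing frequently queried paths.
```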

User Advice: Data and analytics leaders looking to modernize their data management solutions
must:

■ Invest in augmented data catalogs. These will help you to inventory all types of metadata —
along with their associated relationships — in a flexible data model. Enrich the model through
semantics and ontologies that make it easier for the business to understand the model and
contribute to it.
■ Combine different data integration styles to incorporate a portfolio-based approach into the
data integration strategy (for example, not just ETL, but a combination of ETL with data
virtualization).
■ Establish a technology base for the data fabric and identify the core capabilities required before
making further purchases. Start by evaluating your current tools (such as data catalogs, data
integration, data virtualization, semantic technology and DBMSs) to identify the existing or
missing capabilities.

■ Invest in data management vendors that exhibit a strong roadmap for augmented capabilities,
i.e., embedded ML algorithms that can utilize metadata and provide actionable
recommendations to inform and automate parts of data integration design and delivery.

Business Impact: By leveraging the data fabric design, data and analytics leaders can establish a
more scalable data integration infrastructure that can provide immediate business impact and
enable new use cases, such as:

■ Data fabrics provide a much-needed productivity boost to data engineering teams that are
struggling with tactical, mundane and often redundant tasks of creating data pipelines. Data
fabrics, once enabled, will assist data engineering teams by providing insights into data integration
design and will even automate repeatable transforms and tasks, so that data engineers can
focus on more strategic initiatives.
■ Data fabrics also support enhanced metadata analysis to support data contextualization by
adding semantic standards for context and meaning (through knowledge graph
implementations). This enables business users to be more involved in the data modeling
process and allows them to enrich models with agreed upon semantics.
■ Over time, the graph develops as more data assets are added and can be accessed by
developers and delivered to various applications as needed. This allows organizations to
integrate data once and share it multiple times, thereby improving the productivity of data
engineering teams.
■ Data fabrics provide improved decisions for when to move data or access it in place. They also
provide the much sought-after capability to convert self-service data preparation views into
operationalized views that need physical data movement and consolidation for repeatable and
optimized access (in a data store such as a data warehouse, for example).

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Cambridge Semantics; Cinchy; CluedIn; data.world; Denodo; Informatica;
Semantic Web Company (PoolParty); Stardog; Talend

Recommended Reading: “Data Fabrics Add Augmented Intelligence to Modernize Your Data
Integration”

“Augmented Data Catalogs: Now an Enterprise Must-Have for Data and Analytics Leaders”

“Modern Data and Analytics Requirements Demand a Convergence of Data Management
Capabilities”

“Top 10 Data and Analytics Technology Trends That Will Change Your Business”

“Magic Quadrant for Data Integration Tools”

“Critical Capabilities for Data Integration Tools”

Embedded AI
Analysis By: Amy Teng; Alan Priestley

Definition: Embedded AI refers to the use of AI/ML techniques within embedded systems to enable
analysis of locally captured data. This requirement is particularly critical for electronic equipment
where decision latency must be minimized for operational efficiency and safety. It can also enable
always-on use cases targeting battery-operated devices requiring low-power operation.

Position and Adoption Speed Justification: There is an increasing demand for embedded
systems to analyze and interpret the data they capture by leveraging AI/ML locally.

Virtually all major MCU vendors have expanded their toolchains to include compilers, model
conversion tools, libraries and application samples (such as object and gesture recognition) to
enable embedded AI. Additionally, the emergence of tiny machine learning (tinyML) has encouraged
many new lightweight ML algorithms. In February 2020, Apple acquired an AI startup, Xnor.ai,
which focuses on binarized neural networks (BNNs), a type of tinyML.
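
As an illustration of the kind of workflow these toolchains support, the sketch below trains a placeholder Keras model and converts it to a quantized TensorFlow Lite flatbuffer that could then be embedded in MCU firmware; the model and data are illustrative only:

```python
# Illustrative tinyML workflow: train a tiny placeholder model, then convert
# and quantize it for an embedded target. Model and data are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(64, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")
model.fit(x, y, epochs=3, verbose=0)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
# The flatbuffer can be compiled into firmware and run with an on-device
# interpreter such as TensorFlow Lite for Microcontrollers.
print(f"Quantized model size: {len(tflite_model)} bytes")
```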

Vendors are also enhancing the AI capabilities of their embedded processors by integrating
hardware logic blocks into chips to optimize and advance inference performance. Renesas
Electronics has introduced an MPU with an embedded Dynamically Reconfigurable Processor (DRP),
a programmable on-chip logic block that can be reconfigured via firmware updates. This enables
the processor to be easily updated with the latest AI algorithms. NXP has a general-purpose MCU
with heterogeneous cores (ARM Cortex M33 and Cadence Tensilica HiFi 4 DSP) targeting audio/
video analytics applications.

ARM’s Cortex-M55 is the first Armv8.1-M-based MCU core with Helium vector extensions focused
on DSP/ML compute capabilities, and Ethos-U55 is the first micro-NPU that will work alongside
Cortex-M by providing configurable MCAs and weight compression. These two technologies,
together with ARM’s software development frameworks, enable partners and developers to quickly
expand to embedded AI/ML applications by reusing current assets and experience.
Semiconductor vendors are integrating these hardware IP blocks into their product lineups, and more
products are expected to be available for market adoption from 2021.

In addition to the aforementioned vendor activities, we expect the market to remain vibrant
throughout the year; as a result, we have moved its position toward the Peak of Inflated Expectations on the Hype Cycle.

User Advice: Adoption of embedded AI requires a clear workflow and vendor support on tools,
especially where the embedded system is used for real-time response and control. As the market is
at an early stage of adoption, IT leaders must:

■ Determine where (endpoint, edge or cloud) is best to execute AI-based data analytics.
■ Identify the subset of applications in your OT system or product portfolios that can be
meaningfully impacted using embedded AI.

■ Evaluate the availability of reference designs that are close to your target application, as well as
chip vendors, their solutions and design partners. Focus on their ability to translate and optimize
your trained model for local systems.
■ Evaluate the process of updating algorithms — ensure no security vulnerability is created due to
changing designs.

Business Impact: Embedded AI enables devices to analyze captured data using AI/ML techniques
locally, reducing the need to transfer data to a remote data center for analysis. This can reduce
latency and enhance operational efficiency. Companies that own, sell or service IoT and industrial
electronics (ranging from OT machines, factory equipment and IoT sensors to consumer electronics) will
be positively impacted, depending on the inclusion of AI and the value it creates.

Initial justification will come from business cases focusing on first-order operational savings, e.g.,
predictive maintenance — these are the easiest and clearest to define. As adoption picks up,
Gartner expects to see additional value created through dynamic, real-time optimization of
manufacturing lines in response to incoming orders and workloads, and through intelligent buildings
that optimize employee productivity.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Arm; Cartesiam; NXP Semiconductors; One Tech; Renesas Electronics;
STMicroelectronics

Recommended Reading: “Market Share Analysis: Microcontrollers, Worldwide, 2019”

Secure Access Service Edge (SASE)


Analysis By: Joe Skorupa; Neil MacDonald

Definition: Secure access service edge (SASE, pronounced “sassy”) delivers multiple capabilities
such as SD-WAN, SWG, CASB, NGFW and zero trust network access (ZTNA).

SASE supports branch office and remote worker access. SASE is delivered as a service and is based
upon the identity of the device or entity, combined with real-time context and security/compliance
policies. Identities can be associated with people, devices, IoT or edge computing locations.
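
To illustrate the identity- and context-driven decision at the heart of this model, the following is a deliberately simplified sketch of a per-session access policy; the attributes and rules are illustrative assumptions, not any vendor’s policy engine:

```python
# Simplified sketch of an identity- and context-aware access decision of the
# kind a SASE/ZTNA service evaluates per session. Attributes and rules are
# illustrative assumptions, not a vendor's actual policy model.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str        # e.g., "employee", "contractor"
    device_managed: bool
    device_patched: bool
    app_sensitivity: str  # "low" or "high"

def decide(req: AccessRequest) -> str:
    if not req.device_managed and req.app_sensitivity == "high":
        return "deny"
    if not req.device_patched:
        return "allow_with_isolation"  # e.g., browser isolation or read-only access
    if req.user_role == "contractor" and req.app_sensitivity == "high":
        return "allow_with_mfa"        # step-up authentication
    return "allow"

print(decide(AccessRequest("contractor", True, True, "high")))  # allow_with_mfa
print(decide(AccessRequest("employee", False, True, "high")))   # deny
```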

Position and Adoption Speed Justification: SASE is driven by enterprise digital business
transformation: the adoption of cloud-based services by distributed and mobile workforces; edge
computing and business continuity plans that must include flexible, anywhere, anytime, secure
remote access. While the term originated in 2019, the architecture has been deployed by early
adopters as early as 2017. By 2024, at least 40% of enterprises will have explicit strategies to adopt
SASE, up from less than 1% at year-end 2018.

By 2023, 20% of enterprises will have adopted SWG, CASB, ZTNA and branch FWaaS capabilities
from the same vendor, up from less than 5% in 2019. However, today most implementations involve
two vendors (SD-WAN + Network Security), although single-vendor solutions are appearing. Dual-
vendor deployments that have deep cross-vendor integration are highly functional and largely
eliminate the need to deploy anything more than an L4 stateful firewall in the branch office. This will
drive a new wave of consolidation as vendors struggle to invest to compete in this highly disruptive,
rapidly evolving landscape.

SASE is in the early stages of market development but is being actively marketed and developed by
the vendor community. Although the term is relatively new, the architectural approach (cloud if you
can, on-premises if you must) has been deployed for at least two years. The inversion of networking
and network security patterns as users, devices and services leave the traditional enterprise
perimeter will transform the competitive landscape for network and network security as a service
over the next decade, although the winners and losers will be apparent by 2022. True SASE services
are cloud-native — dynamically scalable, globally accessible, typically microservices-based and
multitenant. The breadth of services required to fulfill the broad use cases means very few vendors
will offer a complete solution in 2020, although many already deliver a broad set of capabilities.
Multiple incumbent networking and network security vendors are developing new or enhancing
existing cloud-delivery-based capabilities.

User Advice: There have been more than a dozen SASE announcements over the past 12 months
by vendors seeking to stake out their position in this extremely competitive market. There will be a
great deal of slideware and marketecture, especially from incumbents that are ill-prepared for the
cloud-based, as-a-service delivery model and the investments required for distributed PoPs. This is a
case where software architecture and implementation matter.

When evaluating SASE offerings, be sure to:

■ Involve your CISO and lead network architect when evaluating offerings and roadmaps from
incumbent and emerging vendors as SASE cuts across traditional technology boundaries.
■ Leverage a WAN refresh, firewall refresh, VPN refresh or SD-WAN deployment to drive the
redesign of your network and network security architectures.
■ Strive for not more than two vendors to deliver all core services.
■ Use cost-cutting initiatives in 2020 from MPLS offload to fund branch office and workforce
transformation via adoption of SASE.
■ Understand what capabilities you require in terms of networking and security, including latency,
throughput, geographic coverage and endpoint types.
■ Combine branch office and secure remote access in a single implementation, even if the
transition will occur over an extended period.
■ Avoid vendors that propose to deliver the broad set of services by linking a large number of
products via virtual machine service chaining.

■ Prioritize use cases where SASE drives measurable business value. Mobile workforce,
contractor access and edge computing applications that are latency sensitive are three likely
opportunities.

Some buyers will implement a well-integrated, dual-vendor, best-of-breed strategy, while others will
select a single-vendor approach. Expect resistance from team members who are wedded to
appliance-based deployments.

Business Impact: SASE will enable I&O and security teams to deliver the rich set of secure
networking and security services in a consistent and integrated manner to support the needs of
digital business transformation, edge computing and workforce mobility. This will enable new digital
business use cases (such as digital ecosystem and mobile workforce enablement) with increased
ease of use, while at the same time reducing costs and complexity via vendor consolidation and
dedicated circuit offload.

COVID-19 has highlighted the need for business continuity plans that include flexible, anywhere,
anytime, secure remote access, at scale, even from untrusted devices. SASE’s cloud-delivered set
of services, including zero trust network access, is driving rapid adoption of SASE.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Akamai; Cato Networks; Cisco; Citrix; iboss; Netskope; Open Systems; Palo Alto
Networks; VMware; Zscaler

Recommended Reading: “The Future of Network Security Is in the Cloud”

“Magic Quadrant for Cloud Access Security Brokers”

“Market Guide for Zero Trust Network Access”

“Market Trends: How to Win as WAN Edge and Security Converge Into the Secure Access Service
Edge”

“Quick Answer: Cost Effectively Scaling Secure Access While Preparing for a Remote Workforce”

Social Distancing Technologies


Analysis By: Leif-Olof Wallin; Nick Jones

Definition: Social distancing technologies help to encourage individuals to maintain a safe distance
from each other. Some of these technologies and solutions also provide contact tracing capabilities
if an individual is discovered to be infected. They can be implemented in many ways, including an
app on a smartphone, as a feature of a location tracking system, a dedicated wearable device or
using observational tools such as video analytics.

Position and Adoption Speed Justification: Social distancing technologies have emerged as
tactical solutions to help organizations and individuals deal with the COVID-19 pandemic. Many of
these technologies use wireless systems for proximity detection, but in principle, any technology
that can measure location or proximity can be used to support social distancing. All such systems
are imperfect, and face challenges such as accuracy, reliability, user acceptance, privacy concerns
and, in the case of smartphone solutions, the challenges of supporting an app on a very wide range
of consumer devices. However, despite these challenges, we expect them to be a useful tactic to
reduce risk in the pandemic. As most such systems are based on modifications of existing
technologies, we expect rapid maturity — within two years.
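
Many of the wireless approaches estimate separation from Bluetooth received signal strength (RSSI) using a log-distance path-loss model; the sketch below shows that calculation, with calibration constants that are illustrative and in practice vary by device and environment:

```python
# Sketch: estimating distance from BLE RSSI with a log-distance path-loss
# model, as many proximity and contact-tracing tools do. The calibration
# values are illustrative; real deployments calibrate per device and setting.

def estimate_distance_m(rssi_dbm: float,
                        rssi_at_1m_dbm: float = -59.0,   # assumed calibration value
                        path_loss_exponent: float = 2.0) -> float:
    """distance = 10 ** ((rssi_at_1m - rssi) / (10 * n))"""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

for rssi in (-55, -65, -75):
    d = estimate_distance_m(rssi)
    print(f"RSSI {rssi} dBm -> ~{d:.1f} m ({'too close' if d < 2.0 else 'ok'})")
```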

User Advice: Organizations that need to manage risk as staff return to work after the pandemic
should consider social distancing technologies because, despite their limitations, any form of risk
reduction is better than none. Industrial, construction and blue collar workers who may not carry
smartphones in their normal working environment may benefit from dedicated proximity-warning
devices, or equipment such as smart hard hats that have been modified to track proximity. Staff in
office-based environments may benefit from app-based solutions. Organizations with
comprehensive endpoint management in place will be best equipped to rapidly deploy these tools
onto users’ devices with minimal friction, as they typically have UEM technologies and a well-
defined hardware base. Most organizations will use social distancing technologies in conjunction
with processes such as reducing the number of employees in offices and establishing behavior and
visual guidelines. Some app-based solutions may be superseded or augmented by national social
distancing app initiatives, or apps from megavendors such as Google and Apple.

Social distancing technologies cannot provide a guarantee against infection, so organizations
should set realistic expectations for the effectiveness of such tools. All are likely to generate false
negatives and positives. It’s likely that app-based systems will be less accurate than dedicated
wearables. Those deploying the technology should also be transparent about what personal data is
stored, collected and retained by such systems and how it will be used for tasks like contact
tracing. However, despite the technologies’ limitations, we expect many organizations will feel that
some support for social distancing is better than none, and additionally some may find their lawyers
recommend them to reduce potential liability.

Business Impact: It’s easier to apply social distancing technologies in situations where the
organization, sometimes in cooperation with a union, can influence individuals and the equipment
they use, e.g., by providing smart badges or standard smartphones. Situations include factories,
warehouses and some offices. Effective application of social distancing technologies is much more
difficult when dealing with a wide range of individuals in the general population, e.g., customers at
retail outlets or in showrooms, or visitors to venues such as museums. Challenges in the latter area
include privacy, convincing individuals to adopt a solution, and supporting apps on a wide and
uncontrolled range of smartphones. Social distancing technology will be one part of a
multidimensional strategy that will include tactics such as behavioral guidelines, new working
practices and controlling the number of visitors to venues. Some of these solutions can be used for
additional use cases, like hand-washing compliance. In some situations, investment in social
distancing technology can also be part of a mitigation strategy against future litigation for not taking
proper care of employees and customers.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: AiRISTA Flow; Apple; Estimote; Fujitsu America; Google; Kiana; Radiant RFID;
Samsung Electronics; Sonitor Technologies; Zebra

Recommended Reading: “Manage Social Distancing and Contact Tracing With Location-Aware
Technologies and Devices”

Explainable AI
Analysis By: Saniye Alaybeyi

Definition: AI researchers define “explainable AI” as an ensemble of methods that make black-box
AI algorithms’ outputs sufficiently understandable. Gartner’s definition of explainable AI is broader
— a set of capabilities that describes a model, highlights its strengths and weaknesses, predicts its
likely behavior, and identifies any potential biases. It can articulate the decisions of a descriptive,
predictive or prescriptive model to enable accuracy, fairness, accountability, stability and
transparency in algorithmic decision making.

Position and Adoption Speed Justification: Not every decision an AI model makes needs to be
explained. There is still a considerable amount of discussion in sectors such as insurance and banking,
where company-level or even legislative restrictions sometimes make it mandatory for
the models these companies use to be explainable. In 2020, more vendors introduced improved
explainable AI capabilities that can help data scientists create an audit trail spanning data
collection, model development and deployment. In 2020, explainable AI was less hyped
than in 2019, and Gartner saw real and useful implementations of explainable AI. Therefore,
we decided to move explainable AI from prepeak 25% to postpeak 5% on the Hype Cycle.
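
Many of these capabilities build on open-source explainability techniques; one example is SHAP values for a tree-based model, sketched below under the assumption that the open-source shap and scikit-learn packages are available (the data is synthetic):

```python
# Sketch: post hoc explanation of a black-box model with SHAP values, assuming
# the open-source shap and scikit-learn packages. The data is synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # feature 0 drives the label most

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # per-feature contributions per prediction

print("Prediction for first record:", model.predict(X[:1])[0])
print("SHAP value array shape:", np.shape(shap_values))
# Larger absolute SHAP values indicate features that contributed most to the
# prediction, which supports transparency and bias reviews.
```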

User Advice:

■ Foster ongoing conversations with various line-of-business leaders, including legal and
compliance, to gain an understanding of the AI model’s interpretability requirements, challenges
and opportunities from each business unit. Integrate these findings into the development of the
enterprise information management strategy.
■ Build partnerships with IT, in particular with application leaders, to explain how the AI model fits
within the overall design and operation of the business solution, and to give stakeholders
visibility into training data.
■ Start with using AI to augment rather than replace human decision making. Having humans
make the ultimate decision avoids some complexity of explainable AI. Data biases may still be
questioned, but human-based decisions are likely to be more difficult to challenge than
machine-only decisions.
■ Create data and algorithm policy review boards to track and perform periodic reviews of
machine learning algorithms and data being used. Continue to explain AI outputs within
changing security requirements, privacy needs, ethical values, societal expectations and
cultural norms.

Business Impact: End-user organizations may be able to utilize some future interpretability
capabilities from vendors to explain their AI outputs. But ultimately, AI explainability is
the end-user organization’s responsibility. End users know the business context their organizations
operate in, so they are better positioned to explain their AI’s decisions and outputs in human-
understandable ways. The need for explainable AI has implications for how IT leaders operate, such
as consulting with the line of business, asking the right questions specific to the business domain,
and identifying transparency requirements for data sources and algorithms. The overarching goal is
that models need to conform to regulatory requirements and take into account any issues or
constraints that the line of business has highlighted. New policies around the inputs and boundary
conditions on the inputs into the AI subsystem, how anomalies are handled, how models are trained
and the frequency of training need to be incorporated into AI governance frameworks. Many
questions about the suitability of the AI model will rely on a clear understanding of the goals of the
application(s) being designed.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: H2O.ai; IBM; Microsoft; simMachines

Recommended Reading: “5 Myths About Explainable AI”

“Predicts 2020: Artificial Intelligence — the Road to Production”

“Build Trust With Business Users by Moving Toward Explainable AI”

Sliding Into the Trough

Carbon-Based Transistors
Analysis By: Gaurav Gupta

Definition: Carbon-based transistors replace silicon in traditional transistors and offer an alternative
solution for performance benefit as Si-based transistors reach practical limits. There are two
examples of C-based transistors: graphene and carbon nanotubes. Graphene is a one-atom-thick
material of pure carbon, bonded together in a hexagonal honeycomb lattice. A carbon nanotube can
be thought of as a sheet of graphene rolled into a cylinder. The rolling-up direction of the graphene
layers determines the electrical properties of the nanotubes.

Position and Adoption Speed Justification: Graphene is a hard material to create, as arranging
carbon atoms in a two-dimensional hexagonal lattice at a fairly large scale is difficult. Material
quality can drastically decrease with just one defect. Graphene field-effect transistors (GFETs) take
the typical FET device and insert a graphene channel tens of microns in size between the source
and drain. Graphene transistors have high device sensitivity and superior conductivity. Another
issue is graphene’s lack of a band gap, which makes it very hard to turn the current off once it starts
flowing, a major roadblock for logic operations, which require on-off switching. Researchers have
been working to find solutions to this problem, but compounded with the lack of a fully integrated
supply chain, graphene remains far from commercial application.

Carbon nanotubes (CNTs) with semiconductor properties offer the promise of small transistors with
high switching speeds in future semiconductor devices, while CNTs with metallic (conducting)
properties hold the promise of low electrical resistance that can be applied to the interconnections
within integrated circuits. Research indicates carbon nanotube FETs have properties that promise
around 10 times the energy efficiency and far greater speeds compared to silicon. CNTs can be
single or multiwalled depending on the number of graphene layers and as a result have different
strength and efficiency. Currently, there are mixed opinions on whether CNT transistors would
maintain their impressive performance at extremely scaled lengths. When fabricated at scale,
the transistors often come with many defects that affect performance, so they remain impractical.
There is currently no technology for their mass fabrication, and production costs remain high.

User Advice: A semiconductor opportunity will be available for next-generation transistors beyond 5
nm. C-based transistors have moved forward on the Hype Cycle toward the Trough of
Disillusionment as they are past their peak of expectations, and researchers and industry experts are now facing reality.
industry experts are facing reality. Target audiences that will require these semiconductors must
continue to work on fabrication at scale to resolve issues with mass production. Additionally,
alternative next-generation transistor solutions are evolving that can challenge their position.

Business Impact: There is potential for a huge impact, particularly when silicon devices reach their
minimum size limits — expected during the next five to 10 years. Wireless communications is an
area where these technologies will be especially beneficial due to their high current-carrying capability in a
small area. An example of a current commercial application is Nantero’s NRAM, which
leverages carbon nanotube technology.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Fujitsu; Graphenea; imec; IBM; Intel; Nano-C; Samsung Electronics; TSMC

Recommended Reading: “Emerging Technology Analysis: Carbon Nanotubes Will Drive the Next
Generation of Semiconductor Devices”

“Emerging Technology Analysis: Carbon Nanotubes and Graphene Are Indispensable for Future
Electronic Products, So Act Now”

“Emerging Technology Analysis: Graphene Just May Be the Material of the Future”

“Emerging Technology Analysis: Graphene Is the Catalyst to Extend Semiconductor and Electronics
Innovation”

Bring Your Own Identity


Analysis By: David Mahdi; Felix Gaehtgens

Definition: Bring your own identity (BYOI) is the concept of allowing users to select and use an
external (third-party) digital identity, such as a social identity (Facebook, VK, WeChat, etc.) or a
higher-assurance identity (such as a bank identity, or a government eID) to assert their identity in
order to access multiple digital services. Service providers can be enabled to trust these external
digital IDs for purposes of authentication and access to digital services, but also for sharing of
identity attributes such as name and address.

Position and Adoption Speed Justification: BYOI consists of several mechanisms and
technologies, each with its own level of adoption and maturity. Social identities are well
established and have been the most commonly used type of digital ID for BYOI; however, this
mechanism raises privacy issues, as the use of social identities leaves a digital
“bread crumb” (log of activity) with social media providers, and such identities have relatively low
identity assurance, as many social media providers do not perform identity proofing when establishing user
credentials.
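
Under the hood, most BYOI integrations rely on standard federation protocols such as OpenID Connect; the sketch below constructs the authorization request a relying party redirects the user to, with placeholder endpoint, client ID and redirect URI values:

```python
# Sketch of an OpenID Connect authorization-code request, the protocol step
# behind most "sign in with ..." BYOI flows. The endpoint, client_id and
# redirect_uri are placeholders, not real credentials.
import secrets
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://idp.example.com/authorize"  # placeholder IdP

def build_login_url(client_id: str, redirect_uri: str) -> str:
    state = secrets.token_urlsafe(16)  # CSRF protection; keep it in the user's session
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",  # identity attributes being requested
        "state": state,
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

print(build_login_url("my-app", "https://app.example.com/callback"))
# The IdP authenticates the user and redirects back with a one-time code, which
# the relying party exchanges for ID and access tokens.
```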

The 2019-2020 period has seen much progress in comparison to earlier years. The EU electronic
IDentification, Authentication and trust Services (eIDAS) regulation established minimum identity
assurance requirements and mandated interoperability by 28 September 2018. Organizations
delivering public digital services in an EU member state must now recognize electronic identification
from all EU member states, and EU Trusted Service Providers can enable users to sign legal
documents using digital signatures. In Canada, SecureKey launched Verified.Me, in addition to the
already established Concierge service. WeChat was chosen to deliver an electronic eID in
Guangzhou, China in late 2018, with expansion to other provinces in 2019.

Financial institutions also forged ahead with interoperable digital identities. In the Nordics,
partnerships between the government and financial institutions are already established. Capital
One’s Identity Services was launched in 2017 and expanded, including through acquisitions of
technology. Mastercard announced a consumer-centric model for digital identity in 2019. Other tech
titans such as Apple announced and launched “Sign in With Apple” in 2019, which leverages Apple
digital identities.

Furthermore, new and innovative approaches to decentralized identity, also known as “blockchain
identity” and “self-sovereign identity,” are spawning a lively mix of startups and industry consortia,
and large technology providers are also investing in this area.

User Advice: Recognize that the proliferation of siloed, noninteroperable digital identities will not
scale with the needs of digital business. Determine how to take value from, or in some cases,
contribute to, the BYOI landscape. Especially for B2C or G2C initiatives, there are some potential
risks that can arise from not leveraging BYOI such as:

■ Loss of customers: Carefully determine how the friction of using legacy approaches reduces
customer experience (CX) and thus customer retention. BYOI can alleviate this.
■ Honeypot for identity credentials and personal information: Mitigate risks here by enabling
customer and other users to rely on BYOI. However, by not considering BYOI, full responsibility
for identity and credential exposure remains with the enterprise.

Focus on reducing friction by leveraging common BYOI uses such as account registration and login.
Creating a great CX can offset risks of diluting the brand and the loss of ownership of the customer
journey.

Ensure the level of trust provided by the identity provider (IdP) matches the level of risk, or the
identity provider provides trust elevation to bridge any gap.

Determine the overall model of your approach to consumer access: Will you accept other methods
for BYOI (i.e., accept third-party identities)? Will you be a third-party IdP that offers identities for
consumption by other organizations?

Business Impact: BYOI offers the potential to leverage outside identities to help reduce friction and
to increase adoption, security and overall end-user satisfaction. Exploiting higher-trust BYOI for
customer registration potentially avoids the cost of doing your own identity proofing, as can relying
on high-assurance IdPs to perform appropriate risk assessment and MFA at authentication, lowering
the barrier to new business models that require higher levels of identity assurance. This can have
transformational impacts on certain industries, especially in the era of digital business.

Many organizations have made significant investments in their IAM approach to retain the
customer, and therefore have established themselves as custodians of digital identity. However, only
a small number of these organizations will likely be able to monetize their existing client base by
becoming a third-party IdP. Key decision points that can motivate a move in this direction include:

■ Monetization of identity attributes
■ Brand loyalty
■ User demographics
■ Security and privacy concerns

Benefit Rating: Transformational

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Amazon; Apple; Evernym; Facebook; ForgeRock; Google; Microsoft; SecureKey;
Signicat; Twitter

Recommended Reading: “Innovation Insight for Bring Your Own Identity”

“Innovation Insight for Decentralized and Blockchain Identity”

“Cool Vendors in Blockchain Technology”

Ontologies and Graphs


Analysis By: Anthony Mullen

Definition: Ontologies and graphs enable users to model a set of concepts, categories, properties
and relationships in a particular domain. They support the development of a consistent terminology
and allow for complex relationships to be represented including part-whole relations, causation,
material constitution, plurality and unity. They are often used to abstract away from underlying
relational schemas and can be seen as a flexible knowledge network with broad use across many
NLT use cases. OWL and RDF are popular standards for ontology definitions.
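
As a small illustration of these standards, the sketch below defines a few concepts and relationships with the open-source rdflib package and queries them with SPARQL; the example domain and terms are invented:

```python
# Sketch: a tiny RDF ontology and graph built with the open-source rdflib
# package and queried with SPARQL. The domain and terms are invented.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/retail#")
g = Graph()
g.bind("ex", EX)

# Concepts (classes) and a simple hierarchy.
g.add((EX.Product, RDF.type, RDFS.Class))
g.add((EX.Laptop, RDFS.subClassOf, EX.Product))

# An instance with a label and a relationship to another entity.
g.add((EX.UltraBook13, RDF.type, EX.Laptop))
g.add((EX.UltraBook13, RDFS.label, Literal("UltraBook 13")))
g.add((EX.UltraBook13, EX.compatibleWith, EX.DockingStationX))

# SPARQL: find everything that is (transitively) a kind of Product.
query = """
SELECT ?item ?label WHERE {
  ?item rdf:type/rdfs:subClassOf* ex:Product .
  OPTIONAL { ?item rdfs:label ?label }
}
"""
for row in g.query(query, initNs={"ex": EX, "rdf": RDF, "rdfs": RDFS}):
    print(row.item, row.label)
```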

Position and Adoption Speed Justification: Today, the heavy burden of humans managing
ontologies alone is reduced by using ML to support their creation, maintenance and tuning. Workflows in
this space are also maturing to create human-in-the-loop designs that support human experts and
users in developing and maintaining them. Many semantic platforms have pivoted to integrate
symbolic (e.g., ontologies) and subsymbolic (e.g., DNNs) approaches over recent years, which has
improved NLT performance.

Ontologies are often a component of broader hybrid AI systems and we see their use across the
following NLT markets: speech to text, insight engines, text mining, conversational systems and
natural language generation. While many end users will indirectly use vendor ontologies, few
develop and maintain their own. However, the proliferation of custom-made NLT use cases will spur
many end users to develop their own.

User Advice: As NLT proliferates in organizations, there is an inevitable increase in inconsistency
of terms and concepts across business units, partners and industries, which ultimately hampers
systemic improvement. To counter information architecture problems, many end users use
ontologies as a necessary abstraction away from service and technology platform relational
schemas.

Their use is very broad and applicable to numerous industries and problems. Examples of their
application include:

■ Product catalogues and discovery
■ Enterprise search
■ Marketing collateral development
■ Content management in media organizations
■ Cause and effect modelling in health populations
■ Representations of digital twins in manufacturing

End users should:

■ Check to see if any large scale ontologies are available for their industry or within their existing
applications.
■ Master entity, intent and relationship definitions for NLT projects (e.g., chatbots) with an
ontology, making them reusable for other NLT projects.
■ Represent product catalogues and services as an ontology to enable richer collaborations
between processes and partners.
■ Use them to speed identification when there are multiple points to triangulate (faster than a
relational database search).
■ Capture and represent tacit and implicit knowledge from employees due to retire.
■ Support generation of reports (e.g., sales, quarterly) using NLG.
■ Hire librarians to complement the data science team to manage ontological models.
■ Consider ontology vendors and their wider offering, specifically how they relate ontologies
(definitions) to graphs (expressions of ontologies as data).

Vendors should seek to make their ontologies available as an asset, in a marketplace, rather than as a
hidden mechanic for the end users they serve, and to use them to expand data and service
partnerships in the NLT space.

Business Impact: As investment and dependence on NLT increase among consumers, enterprises
and vendors, we will see large scale ontologies become a foundational approach for concept and
relationship modelling by organizations. Ontologies are one major tool against the fragmentation
that multiple NLT projects and vendors bring. An important dimension of this technology is the ease
with which ontologies can be generated and maintained, and this has improved with both accessible
UIs and machine learning as part of the workflow. Ontologies also represent an easy-to-use bridge
to external/linked data, allowing organizations to improve their analytical and automation
capabilities.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Expert System; Ontotext; PoolParty; Smartlogic; Synaptica; Taiger; Yactraq

Recommended Reading: “Hype Cycle for Data Management, 2019”

“Magic Quadrant for Insight Engines”

“Selecting the Optimal Technical Architecture for Data Ingestion”

“Data Fabrics Add Augmented Intelligence to Modernize Your Data Integration”

Appendixes
Figure 3. Hype Cycle for Emerging Technologies, 2019

Hype Cycle Phases, Benefit Ratings and Maturity Levels
Table 1. Hype Cycle Phases

Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates
significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a
flurry of well-publicized activity by technology leaders results in some successes, but more failures,
as the technology is pushed to its limits. The only enterprises making money are conference
organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations,
it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse
range of organizations lead to a true understanding of the technology’s applicability, risks and
benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted.
Tools and methodologies are increasingly stable as they enter their second and third generations.
Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth
phase of adoption begins. Approximately 20% of the technology’s target audience has adopted or
is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of
Productivity.

Source: Gartner (July 2020)

Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major
shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly
increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased
revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to
translate into increased revenue or cost savings.

Source: Gartner (July 2020)

Table 3. Maturity Levels

Embryonic: In labs. Products/vendors: none.

Emerging: Commercialization by vendors; pilots and deployments by industry leaders.
Products/vendors: first generation, high price, much customization.

Adolescent: Maturing technology capabilities and process understanding; uptake beyond early
adopters. Products/vendors: second generation, less customization.

Early mainstream: Proven technology; vendors, technology and adoption rapidly evolving.
Products/vendors: third generation, more out-of-box methodologies.

Mature mainstream: Robust technology; not much evolution in vendors or technology.
Products/vendors: several dominant vendors.

Legacy: Not appropriate for new developments; cost of migration constrains replacement.
Products/vendors: maintenance revenue focus.

Obsolete: Rarely used. Products/vendors: used/resale market only.

Source: Gartner (July 2020)

Gartner Recommended Reading


Some documents may not be available as part of your current Gartner subscription.

Understanding Gartner’s Hype Cycles

Create Your Own Hype Cycle With Gartner’s Hype Cycle Builder

Toolkit: How to Build an Emerging Technology Radar

More on This Topic


This is part of an in-depth collection of research. See the collection:

■ 2020 Hype Cycle Special Report: Innovation as Strategy

GARTNER HEADQUARTERS

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096

Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations,


visit http://www.gartner.com/technology/about.jsp

© 2020 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. and its affiliates. This
publication may not be reproduced or distributed in any form without Gartner's prior written permission. It consists of the opinions of
Gartner's research organization, which should not be construed as statements of fact. While the information contained in this publication
has been obtained from sources believed to be reliable, Gartner disclaims all warranties as to the accuracy, completeness or adequacy of
such information. Although Gartner research may address legal and financial issues, Gartner does not provide legal or investment advice
and its research should not be construed or used as such. Your access and use of this publication are governed by Gartner Usage Policy.
Gartner prides itself on its reputation for independence and objectivity. Its research is produced independently by its research
organization without input or influence from any third party. For further information, see "Guiding Principles on Independence and
Objectivity."
