Hype Cycle for Artificial Intelligence, 2023
Additional Perspectives
Analysis
What You Need to Know
Generative AI has had an impact like no other technology in the past decade. The
increased productivity for developers and knowledge workers, using systems like
ChatGPT, is very real and has caused organizations and industries to rethink their
business processes and the value of human resources.
In turn, the apparent abilities of generative AI systems have rekindled debates on the safe
usage of AI and whether artificial general intelligence can be achieved, or has even already
arrived. Current generative AI techniques are fallible, however, and many of the innovations on this year’s Hype Cycle must be combined to overcome these limitations and mitigate the risks.
Data and analytics (D&A) leaders must leverage this research to prepare their AI strategy
for the future and utilize technologies that offer high impact in the present.
AI-related innovations that have moved past the peak and are entering the Trough of
Disillusionment include synthetic data, edge AI, ModelOps and knowledge graphs.
Knowledge graphs are the second biggest mover on the Hype Cycle and have been touted
as the solution to many of the problems with generative AI techniques, but still require
some work to become a mainstream technology.
Intelligent applications, cloud AI services, data labeling and annotation, and computer
vision are moving toward the Plateau of Productivity, all also expedited by generative AI
advancements.
To summarize, Gartner sees two sides to the generative AI movement on the path toward more powerful AI systems. The innovations profiled on this Hype Cycle include:
■ Autonomic systems
■ AI engineering
■ Data-centric AI
■ Composite AI
■ Operational AI systems
■ AGI
■ Prompt engineering
■ Smart robots
■ ModelOps
■ Synthetic data
■ Intelligent applications
■ Cloud AI services
■ Computer vision
■ First-principles AI
■ Neuro-symbolic AI
■ Multiagent systems
■ Causal AI
■ AI simulation
■ AI TRiSM
■ Responsible AI
■ Foundation models
■ Knowledge graphs
Those innovations that deserve particular attention within the two- to five-year period to
mainstream adoption include generative AI and decision intelligence. Early adoption of
these innovations will lead to significant competitive advantage and ease the problems
associated with utilizing AI models within business processes.
Several innovations have a five- to 10-year period to mainstream adoption, and from these, responsible AI and foundation models should already be applied in small-scale projects to deliver immediate impact.
D&A leaders should balance the strategic exploration of high-value propositions with
those that do not require extensive engineering or data science proficiency, and that have
been commoditized as stand-alone applications and within packaged business solutions.
These innovations include computer vision, knowledge graphs, smart robots, intelligent applications and cloud AI services.
The following innovations no longer appear on this Hype Cycle:
■ Natural language processing: This has been broken down into several technologies,
covered as part of Hype Cycle for Natural Language Technologies, 2023.
■ Digital ethics: This has been subsumed into responsible AI, which appears on this
Hype Cycle.
■ Deep learning: The AI Hype Cycle provides a view of technologies at a high level.
Specific machine learning techniques are covered as part of Hype Cycle for Data
Science and Machine Learning, 2023.
Autonomic Systems
Maturity: Embryonic
Definition:
Autonomic systems are emerging as an important trend as they enable levels of business
adaptability, flexibility and agility that can’t be achieved with traditional AI techniques
alone. Their flexibility is valuable in situations where the operating environment is
unknown or unpredictable, and real-time monitoring and control aren’t practical. Their
learning ability is valuable in situations where a task can be learned even though there is
no well-understood algorithm to implement it.
Business Impact
■ Autonomic systems add value where we cannot program the exact learning algorithm in advance, but the task is continuously learnable.
Drivers
■ Automated systems are a very mature concept. They perform well-defined tasks and have fixed deterministic behavior (e.g., an assembly robot welding cars). The increasing number of use cases around automation using AI techniques is a strong base for autonomic systems.
Obstacles
■ Nondeterminism: Systems that continuously learn and adapt their behavior aren’t
predictable. This will pose challenges for employees and customers who may not
understand how and why a system performed as it did.
■ Immaturity: Skills in the area will be lacking until autonomics becomes more
mainstream. New types of professional services may be required.
■ Digital ethics and safety: Autonomic systems will require architectures and
guardrails to prevent them from learning undesirable, dangerous, unethical or even
illegal behavior when no human is validating the system.
■ Legal liability: It may be difficult for the supplier of an autonomic system to take
total responsibility for its behavior because that will depend on the goals it has set,
its operating conditions and what it learned.
User Recommendations
■ Manage risk in autonomic system deployments by analyzing the business, legal and
ethical consequences of deploying autonomic systems — which are partially
nondeterministic. Do so by creating a multidisciplinary task force.
Sample Vendors
First-Principles AI (FPAI)
Maturity: Emerging
Definition:
As AI expands in engineering and scientific use cases, it needs a stronger ability to model
problems and better represent their context. Digital-only AI solutions cannot generalize
well enough beyond training, limiting their adaptability. FPAI instills a more reliable
representation of the context and the physical reality, yielding more adaptive systems. A
better ability to abstract leads to reduced training time, improved data efficiency, better
generalization and greater physical consistency.
Business Impact
■ FPAI approaches instill a more flexible representation of the context and conditions
in which systems operate, allowing software developers to build more adaptive
systems. Traditional business modeling approaches have been brittle. This is
because the digital building blocks making up solutions cannot generalize well
enough beyond their initial training data, therefore limiting the adaptability of those
solutions.
■ Complex systems like climate models, large-scale digital twins and complex health
science problems are particularly challenging to model. Composite AI approaches
provide more concrete answers and manageable solutions to these problems, but
their engineering remains a significant challenge. FPAI provides more immediate
answers to these problems.
■ The need for more robust and adaptable business simulation systems will also
promote the adoption of FPAI approaches. With a better range of context
modelization and more accurate knowledge representation techniques, simulations
will be more reliable and account for a wider range of possible scenarios — all better
anchored in reality.
Obstacles
■ Computationally, the scaling of the training, testing and deployment of complex FPAI
models on large datasets in an efficient manner will also be an issue.
■ A brute-force approach is prevalent in AI and is easy for data scientists to implement, while first principles require additional fundamental knowledge of a subject, calling for a multidisciplinary team.
User Recommendations
■ Enforce standards for testing accuracy and physical consistency for physics and
first-principles-based models of the relevant domain, while characterizing sources of
uncertainty.
■ Promote model-consistent training for FPAI models and train models with data
characteristics representative of the downstream application, such as noise, sparsity
and incompleteness.
Sample Vendors
Predicts 2023: Simulation Combined With Advanced AI Techniques Will Drive Future AI
Investments
Multiagent Systems
Analysis By: Leinar Ramos, Anthony Mullen, Pieter den Hamer
Maturity: Embryonic
Definition:
Current AI is focused on the creation of individual agents built for specific use cases,
limiting the potential business value of AI to simpler problems that can be solved by
single monolithic models. The combined application of multiple autonomous agents can
tackle complex tasks that individual agents cannot, while creating more adaptable,
scalable and robust solutions. It is also able to succeed in environments where
decentralized decision making is required.
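As a minimal, hypothetical illustration of this coordination pattern (the agent roles and functions below are invented for this sketch, not drawn from any vendor framework), two simple agents with different expertise can iterate on a shared task:

```python
# Minimal sketch of two cooperating agents with different expertise.
# All names here are hypothetical; real multiagent frameworks add memory,
# tool use and far richer negotiation protocols.

def drafting_agent(task, feedback=None):
    """Proposes a solution, optionally revising it to address feedback."""
    draft = f"plan for: {task}"
    if feedback:
        draft += f" (revised to address: {feedback})"
    return draft

def reviewing_agent(draft):
    """Checks the draft against a simple constraint and returns feedback."""
    if "budget" not in draft:
        return "missing budget estimate"
    return None  # no objections

def run_multiagent(task, max_rounds=3):
    feedback = None
    draft = None
    for _ in range(max_rounds):
        draft = drafting_agent(task, feedback)
        feedback = reviewing_agent(draft)
        if feedback is None:
            return draft
        task = task + ", include budget"  # agents adapt without central control
    return draft

print(run_multiagent("open a new warehouse"))
```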
Business Impact
■ Generative AI: Orchestrating large language model agents for complex tasks
■ Supply chain: Optimizing scheduling, planning, routing and traffic signal control
Drivers
■ Generative AI: Large language models (LLMs) are increasingly augmented with
additional capabilities, such as tools and internal memory, to make them better
agents. Assembling and combining different LLM-based agents is increasing the
interest in multiagent systems.
Obstacles
■ Training complexity: Multiagent systems are typically harder to train and build than
individual agents. These systems can exhibit emergent behavior that is hard to
predict in advance, which increases the need for robust training and testing. There
might be, for example, conflicting objectives and interactions between agents that
create undesirable behavior.
■ Limited adoption and readiness: Despite its benefits, the application of multiagent
systems to real-world problems is not yet widespread, which creates a lack of
enterprise awareness and readiness to implement. Business partners might struggle
to understand why a multiagent simulation is required.
User Recommendations
■ Use multiagent systems for complex problems that require decentralized decision
making and cannot be solved by single-agent AI models. This includes problems
with changing environments where agents need to adapt and problems where a
diverse set of agents with different expertise can be combined to accomplish a goal.
■ Educate your data science teams on multiagent systems: how they differ from single-agent AI design and which techniques are available to train and build them, such as reinforcement learning.
Sample Vendors
Neuro-Symbolic AI
Analysis By: Erick Brethenoux, Afraz Jaffri
Maturity: Embryonic
Definition:
Business Impact
Drivers
■ The limitations of AI models that rely purely on machine learning techniques, which focus on correlation rather than understanding and reasoning. The newest generation of LLMs is
well-known for its tendency to give factually incorrect answers or produce
unexpected results.
■ The need for explanation and interpretability of AI outputs that are especially
important in the regulated industry use cases and in systems that use private data.
■ The need to move toward semantics over syntax in systems that deal with real-world
entities in order to ground meaning to words and terms in specific domains.
■ The set of tools available to combine different types of AI models is increasing and
becoming easier to use for developers, data scientists and end users. The dominant
approach is to chain together results from different models (composite AI) rather
than using single models that are neuro-symbolic in nature.
Obstacles
■ The commercial and investment trajectory for AI startups allocates almost all capital to deep learning approaches, leaving only those willing to bet on the future to invest in neuro-symbolic AI development.
■ Popular media and academic conferences do not give as much exposure to the
neuro-symbolic AI movement as compared to other approaches, for now.
User Recommendations
■ Invest in data architecture that can leverage the building blocks for neuro-symbolic
AI techniques such as knowledge graphs and agent-based techniques.
Sample Vendors
Predicts 2023: Simulation Combined With Advanced AI Techniques Will Drive Future AI
Investments
AI Engineering
Analysis By: Kevin Gabbard, Soyeb Barot
Maturity: Emerging
Definition:
The potential value of AI has led to huge demand to rapidly launch market-ready AI
solutions. This is a big engineering challenge. Most enterprises still struggle to move
individual pilots to production, much less operate portfolios of AI solutions at scale.
Establishing consistent AI pipelines enables enterprises to develop, deploy, adapt and
maintain AI models (statistical, machine learning, generative, deep learning, graph,
linguistic and rule-based) consistently, regardless of environment.
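As a minimal sketch of what a consistent pipeline can look like (the registry, stage names and checks below are hypothetical placeholders, not a specific DataOps/MLOps/ModelOps product), the same promotion logic can be applied to any model artifact regardless of its type or target environment:

```python
# Minimal sketch of a consistent promotion pipeline for AI artifacts.
# The registry dict, stage names and checks are hypothetical placeholders.

registry = {}  # model_name -> {stage: artifact}

def register(model_name, artifact):
    registry.setdefault(model_name, {})["dev"] = artifact

def promote(model_name, from_stage, to_stage, checks):
    """Move an artifact to the next stage only if every check passes."""
    artifact = registry[model_name][from_stage]
    if all(check(artifact) for check in checks):
        registry[model_name][to_stage] = artifact
        return True
    return False

# Example checks, applied identically to ML, rule-based or graph models.
min_accuracy = lambda a: a.get("accuracy", 0) >= 0.8
has_owner = lambda a: "owner" in a

register("churn_model", {"accuracy": 0.85, "owner": "data-science"})
promote("churn_model", "dev", "staging", [min_accuracy, has_owner])
promote("churn_model", "staging", "prod", [min_accuracy, has_owner])
print(registry["churn_model"].keys())  # dict_keys(['dev', 'staging', 'prod'])
```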
Business Impact
Drivers
■ DataOps, ModelOps and DevOps provide best practices for moving artifacts through
the AI development life cycle. Standardization across data and model pipelines
accelerates the delivery of AI solutions.
User Recommendations
■ Use point solutions sparingly and only to plug feature/capability gaps in fully
featured DataOps, MLOps, ModelOps and PlatformOps tools.
■ Upskill data engineering and platform engineering teams to adopt tools and
processes that drive continuous integration/continuous development for AI artifacts.
Sample Vendors
Amazon Web Services; Dataiku; DataRobot; Domino Data Lab; Google; HPE; IBM; Iguazio;
Microsoft
Demystifying XOps: DataOps, MLOps, ModelOps, AIOps and Platform Ops for AI
AI Simulation
Analysis By: Leinar Ramos, Anthony Mullen, Pieter den Hamer, Jim Hare
Maturity: Emerging
Definition:
Increased complexity in decision making is driving demand for both AI and simulation.
However, current AI faces challenges, as it is brittle to change and requires a lot of data.
Conversely, realistic simulations can be expensive and difficult to build and run. To
resolve these challenges, a growing approach is to combine AI and simulation: Simulation
is used to make AI more robust and compensate for a lack of training data, and AI is used
to make simulations more efficient and realistic.
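As a minimal sketch of the first direction, simulation compensating for scarce training data, assuming only NumPy and a toy projectile model chosen purely for illustration:

```python
# Minimal sketch: use a simple physics simulation to generate synthetic
# training data where real measurements are scarce. The projectile model
# and noise level are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def simulate_range(angle_deg, speed=30.0, g=9.81):
    """Ideal projectile range plus sensor-like noise."""
    angle = np.radians(angle_deg)
    true_range = speed**2 * np.sin(2 * angle) / g
    return true_range + rng.normal(0, 0.5)

# The simulation stands in for expensive real-world data collection.
angles = rng.uniform(10, 80, size=500)
ranges = np.array([simulate_range(a) for a in angles])

# Train a simple surrogate model on the synthetic dataset.
coeffs = np.polyfit(angles, ranges, deg=3)
print("predicted range at 45 degrees:", np.polyval(coeffs, 45.0))
```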
Business Impact
■ Increased AI value by broadening its use to cases where data is scarce, using
simulation to generate synthetic data (for example, robotics and self-driving cars)
■ Greater efficiency by leveraging AI to decrease the time and cost to create and use
complex and realistic simulations
Drivers
■ Limited availability of AI training data is increasing the need for synthetic data
techniques, such as simulation. Simulation is uniquely positioned among synthetic
data alternatives in its ability to generate diverse datasets that are not constrained
by a fixed “seed” dataset to generate synthetic data from.
■ Increased technical debt in AI is driving the need for the reusable environments
that simulation provides. Current AI focuses on building short-lived AI models with
limited reuse, accumulating technical debt. Organizations will increasingly deploy
hundreds of AI models, which requires a shift in focus toward building persistent,
reusable environments where many AI models can be trained, customized and
validated. Simulation environments are ideal since they are reusable, scalable, and
enable the training of many AI models at once.
Obstacles
■ Gap between simulation and reality: Simulations can only emulate — not fully
replicate — real-world systems. This gap will reduce as simulation capabilities
improve, but it will remain a key factor. Given this gap, AI models trained in
simulation might not have the same performance once they are deployed:
differences in the simulation training dataset versus real-world data can impact
models’ accuracy.
■ Fragmented vendor market: The AI and simulation markets are fragmented, with
few vendors offering combined AI simulation solutions, potentially slowing down the
deployment of this capability.
User Recommendations
■ Create synergies between AI and simulation teams, projects and solutions to enable
a next generation of more adaptive solutions for ever-more complex use cases.
Incrementally build a common foundation of more generalized and complementary
models that are reused across different use cases, business circumstances and
ecosystems.
■ Prepare for the combined use of AI, simulation and other relevant techniques, such
as graphs, natural language processing or geospatial analytics, by prioritizing
vendors that offer platforms that integrate different AI techniques (composite AI), as
well as simulation.
Sample Vendors
Altair; Ansys; Cosmo Tech; Epic Games; MathWorks; Microsoft; NVIDIA; Rockwell
Automation; The AnyLogic Company; Unity
Predicts 2023: Simulation Combined With Advanced AI Techniques Will Drive Future AI
Investments
Artificial General Intelligence (AGI)
Maturity: Embryonic
Definition:
As AI becomes more sophisticated and powerful, with recent great advances in generative
AI in particular, a growing group of people see AGI as no longer purely hypothetical.
Improving our understanding of at least the concept of AGI is critical for steering and
regulating AI’s further evolution. It is also important to manage realistic expectations and
to avoid prematurely anthropomorphizing AI. However, should AGI become real, its impact
on the economy, (geo)politics, culture and society should not be underestimated.
Business Impact
In the short term, organizations must know that hype about AGI exists today among many
stakeholders, stoking fears and unrealistic expectations about current AI’s true
capabilities. This AGI anticipation is already accelerating the emergence of more AI
regulations and affects people’s trust and willingness to apply AI today. In the long term,
AI continues to grow in power and, with or without AGI, will increasingly impact
organizations, including the advent of machine customers and autonomous business.
Drivers
■ Vendors such as Google, IBM, NNAISENSE, OpenAI and Vicarious are actively
researching the field of AGI.
■ Humans’ innate desire to set lofty goals is also a major driver for AGI. At one point in
history, humans wanted to fly by mimicking bird flight. Today, airplane travel is a
reality. The inquisitiveness of the human mind, taking inspiration from nature and
from itself, is not going to fizzle out.
Obstacles
■ There is little scientific consensus about what “intelligence” and related terminology
like “understanding” actually mean, let alone how AGI should be exactly defined and
interpreted. Flamboyant representations of AGI in science fiction create a disconnect
from reality. Scientific understanding about human intelligence is still challenged by
the enormous complexity of the human brain and mind. Several breakthrough
discoveries are still needed before human intelligence is properly understood at last.
This in turn is foundational to the “design” or at least validation of AGI, even when
AGI will emerge in a nonhuman, nonbrainlike form. Moreover, once AGI is understood
and designed, further technological innovations will likely be needed to actually
implement AGI. For these reasons, strong AI is unlikely to emerge in the near future. It might arrive sooner if one settles for a narrower, watered-down version of AGI in which AI performs only a few tasks, rather than all tasks, at the same level as humans; however, that would no longer really be AGI as defined here.
User Recommendations
■ Today, people may be either overly concerned about future AI replacing humanity or
overly excited about current AI’s capabilities and impact on business. Both cases will
hamper a realistic and effective approach toward using AI today. To mitigate this
risk, engage with stakeholders to address their concerns and create or maintain
realistic expectations.
■ Stay apprised of scientific and innovative breakthroughs that may indicate the
possible emergence of AGI. Meanwhile, keep applying current AI to learn, reap its
benefits and develop practices for its responsible use.
■ Although AGI is not a reality now, current AI already poses significant risks regarding
bias, reliability and other areas. Adopt emerging AI regulations and promote internal
AI governance to manage current and emerging future risks of AI.
Sample Vendors
AGI Innovations; Google; IBM; Kimera Systems; Microsoft; New Sapience; NNAISENSE;
OpenAI; Vicarious
Causal AI
Analysis By: Pieter den Hamer, Leinar Ramos, Ben Yan
Maturity: Emerging
AI’s ultimate value comes from helping people take better actions. Machine learning (ML)
makes predictions based on statistical relationships (correlations), regardless of whether
these are causal. This approach is fine for prediction, but predicting an outcome is not the
same as understanding what causes it and how to improve it. Causal AI is crucial when
we need to be more prescriptive to determine the best actions to influence specific
outcomes. Causal AI techniques help make AI more autonomous, explainable, robust and
efficient.
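A minimal sketch of this difference, assuming NumPy and a toy structural causal model (the variable names and effect sizes are invented): the observed association between spend and revenue is inflated by a confounder, while a simulated intervention recovers the true causal effect.

```python
# Toy structural causal model: a confounder Z drives both marketing spend X
# and revenue Y, so the observed X-Y association overstates the true effect.
# All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

z = rng.normal(size=n)                        # confounder (e.g., seasonality)
x = 2.0 * z + rng.normal(size=n)              # spend, partly driven by Z
y = 0.5 * x + 3.0 * z + rng.normal(size=n)    # true causal effect of X is 0.5

# Correlation-based estimate (regression slope of Y on X) is biased upward.
observational_slope = np.cov(x, y)[0, 1] / np.var(x)

# Simulated intervention do(X = x): set spend independently of the confounder.
x_do = rng.normal(size=n)
y_do = 0.5 * x_do + 3.0 * z + rng.normal(size=n)
interventional_slope = np.cov(x_do, y_do)[0, 1] / np.var(x_do)

print(f"observational estimate:  {observational_slope:.2f}")   # ~1.7, misleading
print(f"interventional estimate: {interventional_slope:.2f}")  # ~0.5, causal
```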
Business Impact
■ The ability to extract causal knowledge with less costly and time-consuming
experiments
Drivers
■ Limited data availability for certain use cases is pushing organizations toward
more data-efficient techniques like causal AI. Causal AI leverages human domain
knowledge of cause-and-effect relationships to bootstrap AI models in small-data
situations.
■ The growing complexity of use cases and environments where AI is applied requires
more robust AI techniques. Causal structure changes much more slowly than
statistical correlations, making causal AI more robust and adaptable in fast-
changing environments. The volatility of the last few years has exposed the
brittleness of correlation-based AI models across industries. These models have
struggled to adapt because they were trained under a very different context.
■ The need for greater AI trust and explainability is driving interest in models that are
more intuitive to humans. Causal AI techniques, such as causal graphs, make it
possible to be explicit about causes and explain models in terms that humans
understand.
■ The next step in AI requires causal AI. Current deep learning models and, in
particular, generative AI have limitations in terms of their reliability and ability to
reason. A composite AI approach that complements generative AI with causal AI —
in particular, causal knowledge graphs — offers a promising avenue to bring AI to a
higher level.
Obstacles
■ Causality is not trivial. Not every phenomenon is easy to model in terms of its
causes and effects. Causality might be unknown, regardless of AI use.
■ The quality of a causal AI model depends on its causal assumptions and on the
data used to build it. This data is susceptible to bias and imbalance. Just because a
model is causal doesn’t mean that it will outperform correlation-based ones.
■ The vendor landscape is nascent, and enterprise adoption is currently low. Clearly,
this represents a challenge when organizations are running initial causal AI pilots
and identifying specific use cases where causal AI is most relevant.
User Recommendations
■ Use causal AI when you require more augmentation and automation in decision
intelligence — i.e., when AI is needed not only to generate predictions, but also to
understand how to affect the predicted outcomes. Examples include customer
retention programs, marketing campaign allocation and financial portfolio
optimization, as well as smart robotics and autonomous systems.
■ Select different causal AI techniques depending on the complexity of the specific use
case. These include causal rules, causal graphs and Bayesian networks, simulation,
and ML for causal learning.
■ Educate your data science teams on causal AI. Explain the difference between
causal and correlation-based AI, and cover the range of techniques available to
incorporate causality.
Sample Vendors
Actable AI; causaLens; Causality Link; CML Insight; Geminos Software; IBM; Lucid.AI;
Qualcomm; SCALNYX; Xplain Data
Case Study: Causal AI to Maximize the Efficiency of Business Investments (HDFC Bank)
Decision Intelligence
Analysis By: Erick Brethenoux
Maturity: Emerging
Definition:
The current hype around automated decision making and augmented intelligence, fueled
by AI techniques in decision making (including generative AI), is pushing DI toward the
Peak of Inflated Expectations. Recent crises have revealed the brittleness of business
processes. Reengineering those processes to be resilient, adaptable and flexible will
require the discipline brought by DI methods and techniques. A fast-emerging market (DI
platforms) is starting to provide resilient solutions for decision makers.
Business Impact
■ Reduced technical debt and increased visibility. DI improves the impact of business processes by making organizations’ decision models more sustainable, relevant and transparent, which in turn makes decisions more auditable.
Drivers
■ The need to curtail unstructured, ad hoc decisions that are siloed and disjointed.
Often uncoordinated, such decisions promote local optimizations at the expense of
global efficiency. This phenomenon happens from both an IT and a business
perspective.
■ Tighter regulations that are making risk management more prevalent. From privacy
and ethical guidelines to new laws and government mandates, it is becoming
difficult for organizations to fully understand the risk impacts of their decisions. DI
enables an explicit representation of decision models, reducing this risk.
■ Generative AI. The advent of generative AI is accelerating the research and adoption
of composite AI models, which are the foundation of DIPs.
Reengineer Your Decision-Making Processes for More Relevant, Transparent and Resilient
Outcomes
Composite AI
Analysis By: Erick Brethenoux, Pieter den Hamer
Definition:
Business Impact
Composite AI offers two main benefits. First, it brings the power of AI to a broader group
of organizations that do not have access to large amounts of historical or labeled data
but possess significant human expertise. Second, it helps to expand the scope and quality
of AI applications (that is, more types of reasoning challenges can be embedded). Other
benefits, depending on the techniques applied, include better interpretability and resilience
and the support of augmented intelligence.
Drivers
■ ML-based AI techniques lead to insights that inform actions. Additionally, the most
appropriate actions can be further determined by combinations of rule-based and
optimization models — a combination often referred to as prescriptive analytics.
Obstacles
■ Lack of awareness and skills in leveraging multiple AI methods. This could prevent
organizations from considering the techniques particularly suited to solving specific
problem types.
■ Trust and risk barriers. The AI engineering discipline is also starting to take shape,
but only mature organizations have started to apply its benefits in operationalizing
AI techniques. Security, ethical model behaviors, observability, model autonomy and
change management practices will have to be addressed across the combined AI
techniques.
User Recommendations
■ Capture domain knowledge and human expertise to provide context for data-driven
insights by applying decision management with business rules and knowledge
graphs, in conjunction with ML and/or causal models.
■ Combine the power of ML, image recognition or natural language processing with
graph analytics to add higher-level, symbolic and relational intelligence.
Sample Vendors
ACTICO; Aera Technology; FICO; Frontline Systems; IBM; Indico Data; Peak; SAS
How to Use Machine Learning, Business Rules and Optimization in Decision Management
Data-Centric AI
Analysis By: Svetlana Sicular
Maturity: Embryonic
Definition:
Organizations that invest in AI at scale will shake up their data management practices
and capabilities to preserve the evergreen classical ideas and extend them to AI in two
ways:
Drivers
■ Models, especially for generative AI, increasingly come from the vendors, rather
than being delivered in-house. Data is becoming the main means for enterprises to
get value from these pretrained models.
■ Even though the data side of AI reflects understanding of the problem, it is less
exciting. It includes tasks such as preparing datasets and developing a clear
understanding of why the data was collected a certain way, what the data means
and what biases exist in the data.
■ Responsible AI is necessary to ask all the right questions about the data and the solution. These are AI-specific data practices that many enterprises want to address through tooling rather than governance.
■ Data management activities don’t end once the model has been developed.
Deployment considerations and ongoing drift monitoring require dedicated data
management activities and practices.
User Recommendations
■ Enforce policies on data fitness for AI. Define and measure minimum data
standards (such as formats, tools and metrics) for AI early on, to prevent
reconciliation of multiple data approaches when taking AI to scale.
Sample Vendors
Databricks; Explorium; Landing AI; Mobilewalla; MOSTLY AI; Pinecone Systems; Protopia
AI; Scale AI; Snorkel AI; YData
Overcoming Data Quality Risks When Using Semistructured and Unstructured Data for
AI/ML Models
Operational AI Systems
Analysis By: Chirag Dekate, Soyeb Barot, Sumit Agarwal
Maturity: Emerging
Definition:
Business Impact
■ It helps align and automate the data, AI model deployment and governance
pipelines.
■ Operationalization and automation platforms are a core part of how early enterprise
AI pioneers scale productization of AI by leveraging existing data, analytics and
governance frameworks.
■ The enterprise operational AI system (OAISys) enables unification of two core contexts: deployment context
across hybrid, multicloud, edge AI and IoT, and operational context across batch and
streaming processing modes that commonly occur as enterprises train and deploy
production models.
Obstacles
■ Enterprises with low data and AI maturity levels will find OAISys intimidating to build,
deliver and support.
■ OAISys requires integration of full-featured solutions with select tools that address
portfolio gaps with minimal overlap. These include capability gaps around feature
stores, model stores, governance capabilities and more.
■ OAISys requires a high degree of cloud maturity, or the ability to integrate data and
model pipelines across deployment contexts. The potential complexity and costs
may be a deterrent for organizations just starting their AI initiatives.
■ Enterprises seeking to deliver OAISys often seek “unicorn” experts and service
providers to productize AI. Fully featured vendor solutions that enable OAISys are
hard to come by, and enterprises often have to build and support these environments
on their own.
User Recommendations
■ Rationalize the data and analytics environment and leverage current (simplified subset
of) investments in data management, DSML, ModelOps and MLOps tools to build
OAISys.
■ Avoid building patchwork OAISys that integrate piecemeal functionality from scratch
(and add another layer of tool sprawl). Utilize point solutions sparingly and
surgically to plug feature/capability gaps in fully featured DataOps, MLOps and
ModelOps tools.
■ Actively leverage your existing data management, DSML, MLOps and ModelOps
platforms as building blocks, rather than starting from scratch.
Sample Vendors
Amazon Web Services; Dataiku; DataRobot; Domino Data Lab; Google; HPE Ezmeral
Software; IBM; Iguazio; Microsoft; ModelOp
AI TRiSM
Maturity: Adolescent
Definition:
AI trust, risk and security management (AI TRiSM) ensures AI model governance,
trustworthiness, fairness, reliability, robustness, efficacy and data protection. AI TRiSM
includes solutions and techniques for model interpretability and explainability, data and
content anomaly detection, AI data protection, model operations and adversarial attack
resistance.
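A minimal, hypothetical sketch of one such building block, policy-driven filtering of inputs and outputs around a hosted model; the patterns and the call_hosted_llm() placeholder are invented for illustration and are not a vendor API:

```python
# Minimal sketch of enterprise-policy-driven filtering of inputs and outputs
# to and from a hosted model. The patterns and call_hosted_llm() are
# hypothetical placeholders; real AI TRiSM tooling is far more sophisticated.
import re

BLOCKED_INPUT_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]   # e.g., SSN-like strings
BLOCKED_OUTPUT_TERMS = ["internal use only"]

def call_hosted_llm(prompt: str) -> str:
    # Placeholder for a call to a hosted large language model.
    return f"model answer to: {prompt}"

def guarded_completion(prompt: str) -> str:
    # Input filter: stop confidential data from leaving the enterprise.
    if any(re.search(p, prompt) for p in BLOCKED_INPUT_PATTERNS):
        return "[blocked: prompt appears to contain confidential data]"
    answer = call_hosted_llm(prompt)
    # Output filter: redact responses that violate content policy.
    if any(term in answer.lower() for term in BLOCKED_OUTPUT_TERMS):
        return "[redacted: response violated content policy]"
    return answer

print(guarded_completion("Summarize customer 123-45-6789's account"))
print(guarded_completion("Summarize our public returns policy"))
```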
Business Impact
Drivers
■ AI risk and security management imposes new operational requirements that are not
fully understood and cannot be addressed by existing systems. New vendors are
filling this gap.
■ Detecting and stopping adversarial attacks on AI requires new methods that most
enterprise security systems do not offer.
■ Regulations for AI risk management — such as the EU AI Act and other regulatory
frameworks in North America, China and India — are driving businesses to institute
measures for managing AI model application risk. Such regulations define new
compliance requirements organizations will have to meet on top of existing ones,
like those pertaining to privacy protection.
Obstacles
■ AI TRiSM is often an afterthought. Organizations generally don’t consider it until
models or applications are in production.
■ Enterprises interfacing with hosted large language models (LLMs) are missing
native capabilities to automatically filter inputs and outputs — for example,
confidential data policy violations or inaccurate information used for decision
making. Also, enterprises must rely on vendor licensing agreements to ensure their
confidential data remains private in the host environment.
■ Most AI threats are not fully understood and not effectively addressed.
■ Although challenging, the integration of life cycle controls can be done with AI
TRiSM.
User Recommendations
■ Set up an organizational task force or dedicated unit to manage your AI TRiSM
efforts. Include members who have a vested interest in your organization’s AI
projects.
■ Avoid, to the extent possible, black-box models that stakeholders do not understand.
■ Implement solutions that protect data used by AI models. Prepare to use different
methods for different use cases and components.
■ Use enterprise-policy-driven content filtering for inputs and outputs to and from
hosted models, such as LLMs.
Sample Vendors
AIShield; Arize AI; Arthur; Fiddler; ModelOp; Modzy; MOSTLY AI; Protopia AI; SolasAI; TrojAI
Prompt Engineering
Analysis By: Frances Karamouzis, Afraz Jaffri, Jim Hare, Arun Chandrasekaran, Van Baker
Maturity: Emerging
Definition:
Prompt engineering is the discipline of providing inputs, in the form of text or images, to
generative AI models to specify and confine the set of responses the model can produce.
The inputs guide the model toward a desired outcome without updating the actual weights of the model (as is done with fine-tuning). Prompt engineering is also referred to as
“in-context learning,” where examples are provided to further guide the model.
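A minimal sketch of in-context learning, in which labeled examples are placed in the prompt itself; the call_llm() function is a hypothetical placeholder for whichever model API is used:

```python
# Minimal sketch of prompt engineering as in-context learning: examples are
# placed in the prompt to confine the model's responses, with no change to
# model weights. call_llm() is a hypothetical placeholder for any LLM API.

FEW_SHOT_EXAMPLES = [
    ("The delivery arrived two weeks late.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

def build_prompt(new_text: str) -> str:
    lines = ["Classify the sentiment of each review as positive or negative."]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {new_text}\nSentiment:")
    return "\n\n".join(lines)

def call_llm(prompt: str) -> str:
    # Placeholder: send the prompt to a hosted or local generative model.
    return "positive"

prompt = build_prompt("The new dashboard is genuinely useful.")
print(prompt)
print(call_llm(prompt))
```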
Prompt engineering is the linchpin to business alignment for desired outcomes. Prompt
engineering is important because large language models (LLMs) and generative AI
models in general are extremely sensitive to nuances and small variations in input. A
slight tweak can change an incorrect answer to one that is usable as an output. Each
model has its own sensitivity level, and the discipline of prompt engineering is to uncover
the sensitivity through iterative testing and evaluation.
Business Impact
■ Business alignment: It allows data scientists, subject matter experts and
software engineers to steer foundation models, which are general-purpose in nature,
to align to the business, domain and industry.
■ Balance and efficiency: The fundamental driver for prompt engineering is that it allows
organizations to strike a balance between consuming an “as is” offering versus
pursuing a more expensive and time-consuming approach of fine-tuning. Generative
AI models, and in particular LLMs, are pretrained, so the data that enterprises want to
use with these models cannot be added to the training set. Instead, prompts can be
used to feed content to the model with an instruction to carry out a function.
■ Role alignment: Data scientists are critical to understanding the capabilities and
limits of models, and to determine whether to pursue a purely prompt-based or fine-
tuning-based approach (or combination of approaches) for customization. The
ultimate goal is to use machine learning itself to generate the best prompts and
achieve automated prompt optimization. This is in contrast to an end user of an
LLM who concentrates on prompt design to manually alter prompts to give better
responses.
Obstacles
■ Risk: Beyond the early stages of awareness and understanding, the biggest obstacle
may be that prompt engineering is focused on verification, validation, improvement
and refinement; however, it’s not without risk. Prompt engineering is not the panacea
to all of the challenges. It helps to manage risk, not remove it completely. Errors may
still occur, and potential liability is at stake.
User Recommendations
■ Build critical skills across a number of different team members that will
synergistically contribute critical elements. For example, there are important roles for
data scientists, business users, domain experts, software engineers and citizen
developers.
■ Communicate and cascade the message that prompt engineering is not foolproof.
Rigor and diligence need to permeate and work across all the enterprise teams to
ensure successful solutions.
Sample Vendors
Quick Answer: How Will Prompt Engineering Impact the Work of Data Scientists?
Neuromorphic Computing
Analysis By: Alan Priestley
Maturity: Embryonic
Business Impact
■ Today’s deep neural network (DNN) algorithms require the use of high-performance
processing devices and vast amounts of data to train these systems, limiting scope
of deployment.
■ Semiconductor vendors are developing chips that utilize spiking neural networks (SNNs) to implement AI-based solutions.
■ Neuromorphic systems can be trained using smaller datasets than DNNs, with the
potential of in situ training.
Obstacles
■ Accessibility: GPUs are more accessible and easier to program than neuromorphic
computing. However, this could change when neuromorphic computing and the
supporting ecosystems mature.
User Recommendations
■ Create a roadmap by identifying key applications that could benefit from
neuromorphic computing.
Sample Vendors
AnotherBrain; Applied Brain Research; BrainChip; GrAi Matter Labs; Intel; Natural
Intelligence; SynSense
Responsible AI
Analysis By: Svetlana Sicular
Maturity: Adolescent
Responsible AI has emerged as the key AI topic for Gartner clients. When AI replaces
human decisions and generates brand-new artifacts, it amplifies both good and bad
outcomes. Responsible AI enables the right outcomes by ensuring business value while
mitigating risks. This requires a set of tools and approaches, including industry-specific
methods, adopted by vendors and enterprises. More jurisdictions introduce new
regulations that challenge organizations to respond in meaningful ways.
Business Impact
Drivers
■ The organizational driver is the need to balance AI’s business value against its risks within regulatory, business and ethical constraints, including employee reskilling and intellectual property protection.
■ The societal driver includes balancing AI safety for societal well-being against limits on human freedoms. Existing and pending legal guidelines and regulations, such as the EU’s Artificial Intelligence Act, make responsible AI a necessity.
■ AI affects all ways of life and touches all societal strata; hence, the responsible AI
challenges are multifaceted and cannot be easily generalized. New problems
constantly arise with rapidly evolving technologies and their uses, such as using
OpenAI’s ChatGPT or detecting deepfakes. Most organizations combine some of the
drivers under the umbrella of responsible AI, namely, accountability, diversity, ethics,
explainability, fairness, human centricity, operational responsibility, privacy,
regulatory compliance, risk management, safety, transparency and trustworthiness.
Obstacles
■ Poorly defined accountability for responsible AI means it looks good on paper but is ineffective in reality.
User Recommendations
■ Publicize consistent approaches across all focus areas. The most typical areas of
responsible AI in the enterprise are fairness, bias mitigation, ethics, risk
management, privacy, sustainability and regulatory compliance.
■ Define model design and exploitation principles. Address responsible AI in all phases
of model development and implementation cycles. Go for hard trade-off questions.
Provide responsible AI training to personnel.
■ Participate in industry or societal AI groups. Learn best practices and contribute your
own, because everybody will benefit from this. Ensure policies account for the needs
of any internal or external stakeholders.
Sample Vendors
Amazon; Arthur; Fiddler; Google; H2O.ai; IBM; Microsoft; Responsible AI Institute; TAZI.AI;
TruEra
Expert Insight Video: What Is Responsible AI and Why Should You Care About It?
Smart Robots
Analysis By: Annette Jump
Maturity: Emerging
Definition:
Smart robotics is an AI use case, while robotics in general does not imply AI. Smart
(physical) robots had less adoption compared with industrial counterparts but received
great hype in the marketplace; therefore, smart robots are still climbing the Peak of
Inflated Expectations. There has been an increased interest in smart robots in the last 12
months, as companies are looking to further improve logistic operations, support
automation and augment humans in various jobs.
Business Impact
Smart robots will make their initial business impact across a wide spectrum of asset-,
product- and service-centric industries. Their ability to reduce physical risk to humans, as
well as do work with greater reliability, lower costs and higher productivity, is common
across these industries. Smart robots are already being deployed among humans to work
in logistics, warehousing, and police and safety applications.
Drivers
■ The market is becoming more dynamic with technical developments of the last two
years, enabling a host of new use cases that have changed how smart robots are
perceived and how they can deliver value.
■ The physical building blocks of smart robots (motors, actuators, chassis and
wheels) have incrementally improved over time. However, areas such as Internet of
Things (IoT) integration, edge AI and conversational capabilities have seen
fundamental breakthroughs. This changes the paradigm for robot deployments.
■ Vendor specialization has increased, leading to solutions that have higher business
value, since an all-purpose/multipurpose device is either not possible or is less
valuable.
■ Growing interest in smart robots across a broad number of industries and use cases
like: medical/healthcare (patient care, medical materials handling, interdepartment
deliveries and sanitization); manufacturing (product assembly, stock replenishment,
support of remote operations and quality control [QC] check); last-mile delivery;
inspection of industrial objects or equipment; agriculture (harvesting and processing
crops); and workplace and concierge robots in workplaces, hospitality, hospitals and
so forth.
■ Hype and expectations will continue to build around smart robots during the next
few years, as providers expand their offerings and explore new technologies, like
reinforcement learning to drive a continuous loop of learning for robots and swarm
management.
Obstacles
■ The need to offload computation to the cloud will decrease from 2024, as robots
will make more autonomous decisions.
■ The continuous evolution of pricing models for robotic solutions (such as outright purchase, monthly lease or hourly charge versus robot as a service) can create some uncertainty for organizations.
User Recommendations
■ Begin pilots designed to assess product capability and quantify benefits, especially
as ROI is possible even with small-scale deployments.
■ Examine current business processes both for near-term deployment of smart robots and for large-scale deployment over the next three to five years.
■ Ensure there are sufficient cloud computing resources to support high-speed and
low-latency connectivity in the next two years.
■ Evaluate multiple global and regional providers due to fragmentation within the
robot landscape.
Sample Vendors
Ava Robotics; Geek+; GreyOrange; iRobot; Locus Robotics; Rethink Robotics; SoftBank
Robotics; Symbotic; Temi; UBTECH
Emerging Technologies: Top Use Cases for Smart Robots to Lead the Way in Human
Augmentation
Emerging Technologies: Top Use Cases Where Robots Interact Directly With Humans
Foundation Models
Analysis By: Arun Chandrasekaran
Maturity: Adolescent
Definition:
Foundation models are large-parameter models that are trained on a broad gamut of
datasets in a self-supervised manner. They are mostly based on transformer or diffusion
deep neural network architectures and will potentially be multimodal in the near future.
They are called foundation models because of their critical importance and applicability
to a wide variety of downstream use cases. This broad applicability is due to the
pretraining and versatility of the models.
Foundation models are an important step forward for AI due to their massive pretraining
and wide use-case applicability. They can deliver state-of-the-art capabilities with higher
efficacy than their predecessors. They’ve become the go-to architecture for NLP, and have
also been applied to computer vision, audio and video processing, software engineering,
chemistry, finance, and legal use cases. Primarily text-based, large language models
(LLMs) are a popular subset of foundation models. ChatGPT is based on one (GPT-4).
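A minimal sketch of consuming a pretrained foundation model, assuming the open-source Hugging Face transformers package is installed and its default pretrained model can be downloaded:

```python
# Minimal sketch of consuming a pretrained foundation model through the
# Hugging Face transformers library (assumes the package is installed and a
# default model download is permitted in your environment).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model
print(classifier("The migration finished ahead of schedule."))
# e.g., [{'label': 'POSITIVE', 'score': 0.99...}]
```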
Business Impact
With their potential to enhance applications across a broad range of natural language use
cases, foundation models will have a wide impact across vertical industries and business
functions. Their impact has accelerated, with a growing ecosystem of startups building
enterprise applications on top of them. Foundation models will advance digital
transformation within the enterprise by improving workforce productivity, automating and
enhancing CX, and enabling rapid, cost-effective creation of new products and services.
Drivers
Foundation models:
■ Deliver superior natural language processing. The difference between these models
and prior neural network solutions is stark. The large pretrained models can produce
coherent text, code, images, speech and video at a scale and accuracy not possible
before.
■ Enable low-friction experimentation. The past year has seen an influx of foundation
models, along with smaller, pretrained domain-specific models built from them. Most
of these are available as cloud APIs or open-source projects, further reducing the
time and cost to experiment.
Obstacles
Foundation models:
■ Concentrate power. These models have been mostly built by the largest technology
companies with huge R&D investments and significant AI talent, resulting in a
concentration of power among a few large, deep-pocketed entities. This situation
may create a significant imbalance in the future.
User Recommendations
■ Create a strategy document that outlines the benefits, risks, opportunities and
execution plans for these models in a collaborative effort.
■ Plan to introduce foundation models into existing speech, text or coding programs.
If you have any older language processing systems, moving to a transformer-based
model could significantly improve performance. One example might be text interpretation, where transformers can interpret multiple ideas in a single utterance.
This shift in approach can significantly advance language interfaces by reducing the
number of interactions.
■ Start with models that have superior ecosystem support, have adequate enterprise
guardrails around security and privacy, and are more widely deployed.
■ Explore new use cases, such as natural language inference, sentiment analysis or
natural-language-based enterprise search, where the models can significantly
improve both accuracy and time to market.
Sample Vendors
Alibaba Group; Amazon; Baidu; Cohere; Google; Hugging Face; IBM; Microsoft; OpenAI;
Stability AI
Generative AI
Maturity: Adolescent
Definition:
Business Impact
Most technology products and services will incorporate generative AI capabilities in the
next 12 months, introducing conversational ways of creating and communicating with
technologies, leading to their democratization. Generative AI will progress rapidly in
industry verticals, scientific discovery and technology commercialization. Sadly, it will also
become a security and societal threat when used for nefarious purposes. Responsible AI,
trust and security will be necessary for safe exploitation of generative AI.
Drivers
■ The hype around generative AI is accelerating. Currently, ChatGPT is the most hyped
technology. It relies on generative foundation models, also called “transformers.”
■ New foundation models and their new versions, sizes and capabilities are rapidly
coming to market. Transformers keep making an impact on language, images,
molecular design and computer code generation. They can combine concepts,
attributes and styles, creating original images, video and art from a text description
or translating audio to different voices and languages.
■ Machine learning (ML) and natural language processing platforms are adding
generative AI capabilities for reusability of generative models, making them
accessible to AI teams.
Obstacles
■ Hallucinations, factual errors, bias, a black-box nature and inexperience with a full AI
life cycle preclude the use of generative AI for critical use cases.
■ Some vendors will use generative AI terminology to sell subpar “generative AI”
solutions.
■ Generative AI can be used for many nefarious purposes. Full and accurate detection
of generated content, such as deepfakes, will remain challenging or impossible.
■ The compute resources for training large, general-purpose foundation models are
heavy and not affordable to most enterprises.
User Recommendations
■ Identify initial use cases where you can improve your solutions with generative AI by
relying on purchased capabilities or partnering with specialists. Consult vendor
roadmaps to avoid developing similar solutions in-house.
■ Pilot ML-powered coding assistants, with an eye toward fast rollouts, to maximize
developer productivity.
■ Use synthetic data to accelerate the development cycle and lessen regulatory
concerns.
■ Mitigate generative AI risks by working with legal, security and fraud experts.
Technical, institutional and political interventions will be necessary to fight AI’s
adversarial impacts. Start with data security guidelines.
Sample Vendors
Emerging Tech Roundup: ChatGPT Hype Fuels Urgency for Advancing Conversational AI
and Generative AI
Synthetic Data
Maturity: Emerging
Definition:
Synthetic data is a class of data that is artificially generated rather than obtained from
direct observations of the real world. Synthetic data is used as a proxy for real data in a
wide variety of use cases including data anonymization, AI and machine learning
development, data sharing and data monetization.
A major problem with AI development today is the burden involved in obtaining real-world
data and labeling it. This time-consuming and expensive task can be remedied with
synthetic data. Additionally, for specific use-cases like training models for autonomous
vehicles, collecting real data for 100% coverage of edge cases is practically impossible.
Furthermore, synthetic data can be generated without personally identifiable information
(PII) or protected health information (PHI), making it a valuable technology for privacy
preservation.
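A deliberately crude sketch of the idea, assuming NumPy: new rows are sampled from per-column statistics of a small real table, so no original record (or its PII) is reused. Commercial synthetic data generators use far richer generative models that also preserve cross-column relationships.

```python
# Crude sketch of synthetic tabular data: sample new rows from per-column
# statistics of a real dataset so that no original record is reused.
# Real synthetic data products model joint distributions, not just columns.
import numpy as np

rng = np.random.default_rng(42)

real = np.array([              # toy "real" dataset: [age, monthly_spend]
    [34, 220.0],
    [45, 310.0],
    [29, 180.0],
    [52, 400.0],
])

means = real.mean(axis=0)
stds = real.std(axis=0)

synthetic = rng.normal(loc=means, scale=stds, size=(1000, real.shape[1]))
print("synthetic column means:", synthetic.mean(axis=0))  # close to the real ones
```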
Business Impact
■ Avoids using PII when training machine learning (ML) models via synthetic
variations of original data or synthetic replacement of parts of data.
■ Enables organizations to pursue new use cases for which very little real data is
available.
Drivers
■ To meet increasing demand for synthetic data for natural language automation
training, especially for chatbots and speech applications, new and existing vendors
are bringing offerings to market. This is expanding the vendor landscape and driving
synthetic data adoption.
■ Synthetic data applications have expanded beyond automotive and computer vision
use cases to include data monetization, external analytics support, platform
evaluation and the development of test data.
■ There is an expansion to other data types. While tabular, image, video, text and
speech applications are common, R&D labs are expanding the concept of synthetic
data to graphs. Synthetically generated graphs will resemble, but not overlap the
original. As organizations begin to use graph technology more, we expect this
method to mature and drive adoption.
Obstacles
■ Synthetic data can have bias problems, miss natural anomalies, be complicated to
develop, or not contribute any new information to existing, real-world data.
■ Buyers are still confused over when and how to use the technology due to lack of
skills.
■ Synthetic data can still reveal a lot of sensitive details about an organization, so security is a concern. An ML model could be reverse-engineered via active learning, in which a learning algorithm interactively queries a user (or other information sources) to label new data points with the desired outputs.
■ If fringe or edge cases are not part of the seed dataset, they will not be synthetized.
This means the handling of such borderline cases must be carefully accommodated.
User Recommendations
■ Identify areas in your organization where data is missing, incomplete or expensive to
obtain, and is thus currently blocking AI initiatives. In regulated industries, such as
healthcare or finance, exercise caution and adhere to rules.
■ Measure and communicate the business value, success and failure stories of
synthetic data initiatives.
Sample Vendors
Anonos (Statice); Datagen; Diveplane; Gretel; Hazy; MOSTLY AI; Neuromation; Rendered.ai;
Tonic.ai; YData
Case Study: Enable Business-Led Innovation with Synthetic Data (Fidelity International)
Edge AI
Analysis By: Eric Goodness
Maturity: Adolescent
Definition:
Edge AI refers to the use of AI techniques embedded in non-IT products, IoT endpoints,
gateways and edge servers. It spans use cases for consumer, commercial and industrial
applications, such as autonomous vehicles, enhanced capabilities of medical diagnostics
and streaming video analytics. While predominantly focused on AI inference, more
sophisticated systems may include a local training capability to provide optimization of
the AI models at the edge.
Many edge computing use cases are latency-sensitive and data-intensive, and require an
increasing amount of autonomy for local decision making. This creates a need for AI-
based applications in a wide range of edge computing and endpoint solutions. Examples
include real-time analysis of edge data for predictive maintenance and industrial control,
inferences and decision support where connectivity is unreliable, or video analytics for
real-time interpretation of video.
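A minimal, hypothetical sketch of this pattern, in which inference runs locally and only noteworthy events are sent upstream; read_sensor(), local_anomaly_score() and send_to_cloud() are placeholders rather than a real device SDK:

```python
# Minimal sketch of an edge AI loop: run inference locally and send only
# noteworthy events upstream, cutting latency and connectivity costs.
# read_sensor(), local_anomaly_score() and send_to_cloud() are hypothetical.
import random
import time

def read_sensor() -> float:
    return random.gauss(50.0, 5.0)        # stand-in for a vibration reading

def local_anomaly_score(value: float) -> float:
    return abs(value - 50.0) / 5.0        # stand-in for an on-device model

def send_to_cloud(event: dict) -> None:
    print("uploading event:", event)      # stand-in for an uplink call

for _ in range(100):
    reading = read_sensor()
    score = local_anomaly_score(reading)
    if score > 2.5:                       # decide locally, in real time
        send_to_cloud({"reading": round(reading, 2), "score": round(score, 2)})
    time.sleep(0.01)                      # sampling interval
```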
Business Impact
■ Connectivity cost reduction, with less data traffic between the edge and the cloud
Drivers
Overall, edge AI has benefited from improvements in AI capabilities and from business demand for new and improved outcomes achievable only by using AI at the edge. Drivers include:
■ Rising demand for R&D in training decentralized AI models at the edge for adaptive
AI. These emerging solutions are driven by explicit needs such as privacy
preservation or the requirement for machines and processes to run in disconnected
(from the cloud) scenarios. Such models enable faster response to changes in the
environment, and provide benefits in use cases such as responding to a rapidly
evolving threat landscape in security operations.
Obstacles
■ The autonomy of edge AI-enabled solutions, built on some ML and deep learning
techniques, often presents questions of trust, especially where the inferences are not
readily interpretable or explainable. As adaptive AI solutions increase, these issues
will increase if initially identical models deployed to equivalent endpoints
subsequently begin to evolve diverging behaviors.
■ The lack of quality and sufficient data for training is a universal challenge across AI
usage.
■ Deep learning in neural networks is a compute-intensive task, often requiring the use
of high-performance chips with corresponding high-power budgets. This can limit
deployment locations, especially where small form factors and lower-power
requirements are paramount.
User Recommendations
■ Determine whether the use of edge AI provides adequate cost-benefit improvements,
or whether traditional centralized data analytics and AI methodologies are adequate
and scalable.
■ Assess the different technologies available to support edge AI and the viability of the
vendors offering them. Many potential vendors are startups that may have
interesting products but limited support capabilities.
■ Use edge gateways and servers as the aggregation and filtering points to perform
most of the edge AI and analytics functions. Make an exception for compute-
intensive endpoints, where AI-based analytics can be performed on the devices
themselves.
Sample Vendors
Akira AI; Edge Impulse; Falkonry; Imagimob; Litmus; MicroAI; Modzy; Octonion Group;
Palantir
ModelOps
Analysis By: Joe Antelmi, Erick Brethenoux, Soyeb Barot
Maturity: Emerging
Definition:
Business Impact
■ Lays down the foundation for the management of various knowledge representation
models, reasoning capabilities and composite model integration.
■ Augments the ability to manage decision models and integrate multiple analytics
techniques for robust decision making.
Drivers
■ The operationalization of ML models is not new, but it is still in its early
stages. With ModelOps, the functionality provided by MLOps is now extended to
other, non-ML models.
■ There’s a need to create resilient and adaptive systems that use a combination of
various analytical techniques for decision support, augmentation and automation.
■ There is a wide range of risk management concerns across different models — drift,
bias, explainability and integrity — that ModelOps helps address.
User Recommendations
■ Leverage different analytics and AI techniques to increase the success rate of data
and analytics initiatives.
■ Utilize ModelOps best practices across data, models and applications to ensure
smooth transitions, reduce friction and increase value generation.
Sample Vendors
DataRobot; Datatron; IBM; McKinsey & Company (Iguazio); ModelOp; Modzy; SAS; Subex;
Valohai; Verta
Toolkit: Delivery Metrics for DataOps, Self-Service Analytics, ModelOps and MLOps
Knowledge Graphs
Maturity: Adolescent
Definition:
Knowledge graphs capture information about the world in an intuitive way yet are still
able to represent complex relationships. Knowledge graphs act as the backbone of a
number of products, including search, smart assistants and recommendation engines.
Knowledge graphs support collaboration and sharing, exploration and discovery, and the
extraction of insights through analysis. Generative AI models can be combined with
knowledge graphs to add trusted and verified facts to their outputs.
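The pattern is straightforward to sketch: retrieve verified facts from the graph and constrain the generative model to them. In the hedged example below, the tiny in-memory triple store and the call_llm() helper are illustrative placeholders for a real graph DBMS and LLM client.

```python
# Illustrative sketch: ground a generative model's answer in facts retrieved
# from a knowledge graph. The triples and entity names are made up.
from typing import List, Tuple

TRIPLES: List[Tuple[str, str, str]] = [
    ("Acme GmbH", "headquartered_in", "Munich"),
    ("Acme GmbH", "founded_in", "1998"),
    ("Munich", "located_in", "Germany"),
]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your actual LLM client call here.
    return "(model output would appear here)"

def facts_about(entity: str) -> List[str]:
    """Return human-readable facts whose subject or object is the entity."""
    return [f"{s} {p.replace('_', ' ')} {o}"
            for s, p, o in TRIPLES if entity in (s, o)]

def grounded_answer(question: str, entity: str) -> str:
    facts = facts_about(entity)
    prompt = (
        "Answer using only the verified facts below.\n"
        "Facts:\n- " + "\n- ".join(facts) + f"\nQuestion: {question}"
    )
    return call_llm(prompt)

print(grounded_answer("Where is Acme GmbH based?", "Acme GmbH"))
```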
Business Impact
Knowledge graphs can drive business impact in a variety of different settings, including:
■ Data management (e.g., metadata management, data cataloging and data fabric)
Drivers
■ The emerging landscape of Web3 applications and the need for data access across
trust networks, leading to the creation of decentralized knowledge graphs to build
immutable and queryable data structures.
■ Improvements in graph DBMS technology that can handle the storage and
manipulation of graph data structures at scale. These include PaaS offerings that
take away the complexity of provisioning and optimizing hardware and
infrastructure.
■ The need to manage the increasing number of data silos where data is often
duplicated, and where meaning, usage and consumption patterns are not well-
defined.
■ The use of graph algorithms and machine learning to identify influencers, customer
segments, fraudulent activity and critical bottlenecks in complex networks.
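As a hedged illustration of the last driver, the sketch below runs two standard graph algorithms with networkx over a made-up account-to-account graph; in practice the graph would be loaded from a graph DBMS or knowledge graph.

```python
# Toy illustration of graph analytics for influencers and bottlenecks.
# The edges are fictitious account-to-account relationships.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("acct_A", "acct_B"), ("acct_B", "acct_C"), ("acct_C", "acct_D"),
    ("acct_B", "acct_E"), ("acct_E", "acct_F"), ("acct_C", "acct_F"),
])

influence = nx.pagerank(G)                   # overall centrality of each node
bottlenecks = nx.betweenness_centrality(G)   # nodes sitting on the most shortest paths

print(max(influence, key=influence.get))     # likely influencer
print(max(bottlenecks, key=bottlenecks.get)) # likely critical bottleneck
```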
Obstacles
■ Awareness of knowledge graph use cases is increasing, but business value and
relevance are difficult to capture in the early implementation stages.
■ The graph DBMS market is fragmented along three dimensions: type of data model
(RDF or property graph), implementation architecture (native or multimodel) and
optimal workload (operational or analytical). This fragmentation continues to cause
confusion and hesitation among adopters.
User Recommendations
■ Create a working group of knowledge graph practitioners and sponsors by
assessing the skills of D&A leaders and practitioners and business domain experts.
Highlight the obstacles to dependable and efficient data delivery for analytics and AI,
and articulate how knowledge graphs can remove them.
■ Run a pilot to identify use cases that need custom-made knowledge graphs. The
pilot should deliver not only tangible value for the business, but also learning and
development for D&A staff.
■ Create a minimum viable subset that can capture the information of a business
domain to decrease time to value. Assess the data, both structured and
unstructured, needed to feed a knowledge graph, and follow Agile development
principles.
■ Utilize vendor and service provider expertise to validate use cases, educate
stakeholders and provide an initial knowledge graph implementation.
■ Include knowledge graphs within the scope of D&A governance and management.
To avoid perpetuating data silos, investigate and establish ways for multiple
knowledge graphs to interoperate and extend toward a data fabric.
Sample Vendors
AI Maker and Teaching Kits
Definition:
Artificial intelligence (AI) maker and teaching kits are applications and software
development kits (SDKs) that abstract data science platforms, frameworks, analytic
libraries and devices to enable software engineers to incorporate AI into new or existing
applications. Maker kits also emphasize teaching engineers new skills and best practices
for integrating software and devices; some kits also include hardware devices.
AI maker kits package developer-friendly APIs and SDKs while often complemented by
custom hardware devices (such as cameras, musical instruments, speakers or vehicles).
These offerings encourage platform developer adoption while educating developers
around new AI capabilities and libraries.
Business Impact
The demand for AI is significant and increasing faster than experienced data scientists
alone can meet, with many software engineering teams now leading AI development and
use cases. Also, the number of sensors and data-centric enablers for AI
use cases is rapidly growing. These offerings will equip software developers to become a
key contingent for AI development and implementation. AI maker kits will also continue to
reduce adoption barriers in the deployment of AI capabilities for software engineers and
citizen data scientists.
Drivers
■ As the demand for more proficient data scientists rises, the adoption of AI maker and
teaching kits will continue to increase.
■ Within many kits, developers can deploy prebuilt models and, optionally, update
those models from cloud services at model runtime.
Obstacles
■ Vendor offerings require distinct deployment considerations and differ in feature
coverage, but we expect greater consistency in the future.
■ Data scale and management strategies can be overlooked as ideas move beyond a
POC stage.
■ Kits support only a limited set of native use cases, such as computer vision, image
recognition, labeling, natural language and text analytics.
■ Market offerings are typically mutually exclusive in terms of the use cases
supported, usually being singular (i.e., a computer vision kit and a kit supporting
natural language processing [NLP] have no shared components or platforms).
■ Market offerings do not follow a consistent set of standards, and kits have an
inconsistent level of support/capabilities for production-ready use cases. Some
support scaling development concepts to full-scale production use cases, while
others offer no path from development-only scenarios.
User Recommendations
■ Leverage maker kits to upskill developer knowledge and skills, which can translate to
present and future enterprise needs that may directly or indirectly relate to kit-specific
use cases.
■ Carefully evaluate and stress-test the maker kit offerings you employ, and make sure
you fully understand the long-term support and vendor viability behind each specific offering.
■ Ensure deployed capabilities are aligned to direct end-user benefits that cannot be
easily achieved without AI.
Sample Vendors
Amazon Web Services; Google; Intel; Microsoft; NVIDIA; Pantech Prolabs India; Rotrics;
Samsung Electronics
Emerging Technologies: Top Use Cases for Smart Robots to Lead the Way in Human
Augmentation
Autonomous Vehicles
Analysis By: Jonathan Davenport
Maturity: Emerging
Definition:
Autonomous vehicles use various onboard sensing and localization technologies, such as
lidar, radar, cameras, global navigation satellite system (GNSS) and map data, in
combination with AI-based decision making, to drive without human supervision or
intervention. Autonomous vehicle technology is being applied to passenger vehicles,
buses and trucks, as well as for specific use cases such as mining and agricultural
tractors.
Business Impact
■ Other companies are quickly following. Mercedes-Benz was the first automotive
manufacturer worldwide to secure internationally valid system approval and has
launched in Germany. Its Level 3 solution has secured approval from the state of
Nevada, and an application to enable cars to drive autonomously in California has
also been made.
■ In China, Changan, Great Wall Motor and Xpeng have announced Level 3 systems.
Other global automakers are following suit. Hyundai’s new Genesis G90 and the Kia
EV9 vehicles will come equipped with a Level 3 Highway Driving Pilot (HDP)
function.
■ This signals that the autonomous vehicle market is most likely to evolve gradually
from ADAS to higher levels of autonomy on passenger vehicles, rather than through a
robotaxi-based revolution. This will require flexible vehicle operational design
domains (ODDs). Progress is being made by companies like Mobileye, whose
perception system was developed on the roads of Israel but required minimal
retraining to perform well in diverse cities such as Munich and Detroit.
■ The most compelling business case for autonomous vehicles relates to self-driving
trucks. Driver pay is one of the largest operating costs associated with running a
commercial truck fleet, and goods can be transported to their destination much faster
because rest breaks are no longer necessary. The Aurora Driver product is now at a
“feature complete” stage, with a plan to launch a “middle-mile” driverless truck
service at the end of 2024.
Analyst Notes:
■ Volvo's EX90 vehicles are being deployed with hardware ready for unsupervised
autonomous driving (including a lidar from Luminar), even though the self-driving
software is not yet ready for deployment. Volvo plans to deploy an over-the-air
software update to move capability from a Level 2 ADAS system to Level 3 in the future.
■ Slow progress saw Ford and VW pull their investments in Argo AI at the end of 2022,
causing the joint venture to close. VW had invested approximately €2 billion in the
company.
User Recommendations
Governments must:
■ Work closely with autonomous vehicle developers to ensure that first responders can
safely respond to road traffic and other emergencies, and that self-driving vehicles do
not obstruct or hinder their activities.
Traditional fleet operators looking to adopt autonomous technology into their fleets
should:
■ Minimize the disruptive impact on driving jobs (bus, taxi and truck drivers) by
developing policies and programs to train and migrate these employees to other
roles.
■ Instigate a plan for how higher levels of autonomy can be deployed to vehicles being
designed and manufactured to future-proof vehicle purchases and enable future
functions-as-a-service revenue streams.
Sample Vendors
Aurora; AutoX; Baidu; Cruise; Mobileye; NVIDIA; Oxbotica; Pony.ai; Waymo; Zoox
Tech Providers 2025: Product Leaders Must Strategize to Win in the Evolving Robotaxi
Ecosystem
Cloud AI Services
Definition:
AI cloud services provide AI model building tools, APIs for prebuilt services and associated
middleware that enable the building/training, deployment and consumption of machine
learning (ML) models running on prebuilt infrastructure as cloud services. These services
include vision and language services and automated ML to create new models and
customize prebuilt models.
The use of AI cloud services continues to increase. Vendors have introduced additional
services and solutions with fully integrated MLOps pipelines. The addition of low-code
tools has added to ease of use. Applications regularly use AI cloud services in language,
vision and automated ML to automate and accelerate business processes. Developers are
aware of these offerings, and are using both prebuilt and customized ML models in
applications.
Business Impact
The impact of AI extends to the applications that enable business, allowing developers
and data scientists to enhance the functionality of these applications. The desire for data-
driven decisions in business is driving the incorporation of forecasts and next best
actions, including automation of many workflows. AI cloud services enable the
embedding of advanced machine learning models in applications that are used to run the
day-to-day business operations.
Drivers
■ Opportunities to capitalize on new insights. The wealth of data from both internal
and third-party sources delivers insights that enable data-driven decision
intelligence.
■ Reduced barriers to entry. The ability to do zero-shot learning and model fine-tuning
has reduced the need for large quantities of data to train models. Access to AI and ML
services for developers and citizen data scientists, via API-callable cloud-hosted
services, will expand the use of AI.
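As one hedged example of an API-callable prebuilt service, the sketch below sends an image to Amazon Rekognition via boto3; it assumes AWS credentials are already configured, and the local file name is a placeholder.

```python
# Sketch of calling a prebuilt vision service (Amazon Rekognition via boto3)
# instead of training a model in-house. "photo.jpg" is a placeholder file.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("photo.jpg", "rb") as f:
    response = rekognition.detect_labels(
        Image={"Bytes": f.read()},
        MaxLabels=5,
        MinConfidence=80,
    )

# Print the labels the prebuilt model detected, with confidence scores.
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```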
Obstacles
■ Lack of understanding by developers and citizen data scientists about how to adapt
these services to specific use cases.
■ Usage-based pricing models for AI cloud services present a risk for businesses, as
the costs associated with using these services can accrue rapidly.
■ Increased need for packaged solutions that utilize multiple services for developers
and citizen data scientists.
■ Lack of marketplaces for prebuilt ML models that can be adapted for specific
enterprise use cases.
■ Continuing need for ModelOps tools that enable integration of AI and ML models
into applications.
User Recommendations
■ Use AI cloud services to build less complex models, giving the benefit of more
productive AI while freeing up your data science assets for higher-priority projects.
■ Establish a center of excellence for responsible use of AI that includes all functional
areas of the business. This is especially important in light of the advances of
generative AI solutions.
Sample Vendors
Alibaba; Amazon Web Services; Baidu; Clarifai; Google; H2O.ai; Huawei; IBM; Microsoft;
Tencent
Intelligent Applications
Analysis By: Alys Woodward, Justin Tung, Stephen Emmott
Maturity: Adolescent
Definition:
Artificial intelligence (AI) is the current competitive play for enterprise applications, with
many technology providers now enabling AI and machine learning (ML) in their products
via inbuilt, added, proxied or custom capabilities. Bringing intelligence into applications
enables them to work autonomously across a wider range of scenarios with elevated
quality and productivity, and reduced risk. Integrated intelligence can also support
decision-making processes alongside transactional processes.
Business Impact
■ Automation — They increase automated and dynamic decision making, reducing the
cost and unreliability of human intervention, and improve the effectiveness of
business processes.
■ AI capabilities and features are increasingly being integrated into ERP, CRM, digital
workplace, supply chain and knowledge management software within enterprise
application suites. LLMs can be layered on top to replace existing user interfaces.
■ Trust in system-generated insights — It takes time for business users to see the
benefit of, and to trust, such insights; some degree of explainability is key.
■ The rapid rise of conversational AI UIs — Since December 2022, ChatGPT and similar
applications have sparked great interest and activity in chat interfaces, and have
added the ability to compose a conversational layer on top of existing legacy
applications.
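The conversational-layer pattern in that last bullet can be sketched as follows; the intent-extraction helper and the legacy endpoint URL are hypothetical placeholders, included only to show the shape of the composition.

```python
# Hedged sketch of a conversational layer on top of a legacy application.
# llm_extract_intent() and the legacy endpoint are hypothetical stand-ins.
import requests

def llm_extract_intent(user_message: str) -> dict:
    # Placeholder for a real LLM call that maps free text to a structured
    # intent, e.g. {"intent": "order_status", "order_id": "12345"}.
    return {"intent": "order_status", "order_id": "12345"}

def handle_chat(user_message: str) -> str:
    intent = llm_extract_intent(user_message)
    if intent["intent"] == "order_status":
        # Call the existing (hypothetical) legacy REST API unchanged.
        resp = requests.get(
            f"https://legacy.example.com/orders/{intent['order_id']}", timeout=10
        )
        return f"Order {intent['order_id']} status: {resp.json().get('status')}"
    return "Sorry, I can't help with that yet."

# handle_chat("Where is my order 12345?")
```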
User Recommendations
■ Bring AI components into your composable enterprise to innovate faster and more safely, to
reduce costs by building reusability, and to lay the foundation for business-IT
partnerships. Remain aware of what makes AI different, particularly how to refresh
ML models to avert implementation and usage challenges.
Sample Vendors
ClayHR; Creatio; Eightfold AI; JAGGAER; Prevedere; Pricefx; Salesforce; Sievo; SugarCRM;
Trust Science
Data Labeling and Annotation
Definition:
Data labeling and annotation (DL&A) is a process where data assets are further classified,
segmented, annotated and augmented to enrich data for better analytics and artificial
intelligence (AI) projects. Associated services and platforms route and allocate these
tasks to both internal staff and external third-party knowledge workers to optimally
manage the required workflows and thus improve the quality of training data.
The need for better training data has increased, because it removes a key bottleneck in
developing AI solutions, especially those specific to generative AI and industry use cases.
Given the typical lack of internal skills and systems, DL&A services and tools are often the
best option (by cost, quality and availability) to provide the data needed for the best AI
results. Today, at least, some AI solutions would not be possible at their current levels
without human-based labeling and its further automation.
Business Impact
Drivers
■ Increased diversity of use cases: These services can accelerate and unlock a wealth
of use cases across all industries, with core competencies in natural language
automation and computer vision. Vendors in the marketplace today have dedicated
offerings for commerce, robotics and autonomous vehicles, retail, GIS/maps, AR/VR,
agriculture, finance, manufacturing and transportation, and communications.
■ Generative AI methods lower the cost of DL&A through automation. LLMs are
increasingly used to extract labels from text data through zero-shot learning.
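A minimal sketch of that zero-shot labeling pattern is shown below; call_llm() is a hypothetical stand-in for whatever LLM API the DL&A pipeline uses, and the label set is illustrative.

```python
# Minimal sketch of zero-shot labeling with an LLM: no labeled examples are
# supplied, only the candidate label set and the text to classify.
LABELS = ["billing", "shipping", "returns", "other"]

def call_llm(prompt: str) -> str:
    # Placeholder: replace with a real LLM client call.
    return "shipping"

def zero_shot_label(text: str) -> str:
    prompt = (
        f"Classify the customer message into exactly one of {LABELS}.\n"
        f"Message: {text}\nLabel:"
    )
    answer = call_llm(prompt).strip().lower()
    return answer if answer in LABELS else "other"  # guard against off-list output

print(zero_shot_label("My parcel never arrived and tracking is stuck."))
```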
Obstacles
■ Supply outstrips demand and price points are often uneconomical for large-scale
data: Many vendors have entered this space in the last few years, and demand from
buyers does not yet match supply. Pricing and business models vary considerably
among providers, and buyers find it difficult to estimate costs.
■ Security concerns: Especially for those DL&A services that bring in public crowds,
many clients feel uneasy distributing certain data to virtually unknown parties.
User Recommendations
■ Ensure the provider you choose has methods to test its pool of knowledge workers
for domain expertise and measures of accuracy and quality.
■ Model costs to avoid surprises by exploring and estimating spend across the variety
of business models, which range from label-volume and project-based pricing to
per-annotator/seat costs.
■ Allow data scientists to focus on more valuable tasks and lighten their load in
classifying and annotating data by using DL&A services.
■ Use vendors with real-time human-in-the-loop solutions for production systems like
chatbots and recommenders to handle low-confidence thresholds, spikes in demand
or access to real-time knowledge not present in the enterprise.
Sample Vendors
CrowdWorks; Defined.ai; Diffgram; Heartex; Isahit; Labelbox; Mindy Support; Scale AI;
Snorkel AI
Emerging Tech: Tech Innovators in Synthetic Data for Image and Video Data — Domain-
Focused
Computer Vision
Analysis By: Nick Ingelbrecht, Shubhangi Vashisth
Definition:
Computer vision is a set of technologies that involve capturing, processing and analyzing
real-world images and videos to extract meaningful, contextual information from the
physical world.
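As a hedged, minimal illustration of that capture, process and extract pipeline, the OpenCV sketch below reads a placeholder image, detects edges and counts candidate object outlines.

```python
# Minimal capture -> process -> extract sketch with OpenCV.
# "frame.jpg" is a placeholder image path.
import cv2

frame = cv2.imread("frame.jpg")                  # capture (here: from disk)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # process: convert to grayscale
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Extract contextual information: count distinct object outlines in the scene.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"Detected {len(contours)} candidate object outlines")
```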
Business Impact
Computer vision technologies are used across all industries and address a broad and
growing range of business applications. These include physical security, retail and
commercial property, automotive, robotics, healthcare, manufacturing, supply
chain/logistics, banking and finance, agriculture, government, media and entertainment,
and Internet of Things (IoT). Computer vision exploits the visible and nonvisible spectrum,
including infrared, hyperspectral imaging, lidar, radar and ultraviolet.
Drivers
■ New business models and applications are emerging, ranging from smartphone
cameras and fun filters, through to global video content production and distribution,
life-saving medical image diagnostics, autonomous vehicles, video surveillance for
security, robotics and manufacturing automation.
Obstacles
Enterprises struggle with how best to exploit their visual information assets and automate
the analysis of exponential volumes of image data:
■ High-end systems are expensive to maintain and support, and building business
cases with adequate ROI is challenging.
■ Integration with existing systems is problematic due to a lack of open interfaces, off-
the-shelf solutions and plug-and-play capabilities.
■ Adequate training and testing data may be hard or expensive to acquire, especially in
areas where available open-source computer vision datasets are declining.
User Recommendations
■ Focus initially on a few small projects, using fail-fast approaches, and scale the most
promising systems into production using cross-disciplinary teams.
■ Test production systems early in the real-world environment because lighting, color,
object disposition and movement can break computer vision solutions that worked
well in the development cycle.
■ Build internal computer vision competencies and processes for exploiting image and
video assets.
■ Reduce the barrier to computer vision adoption by addressing two of the main
challenges: lack of training data, and costly and constrained hardware. Invest in
synthetic and augmented data solutions and in model compression to improve
model performance and expand the range of valuable use cases.
Sample Vendors
Amazon Web Services; Baidu; Clarifai; Deepomatic; Google; Matroid; Microsoft Azure;
Tencent
Appendixes
See the previous Hype Cycle: Hype Cycle for Artificial Intelligence, 2022
Tool: Create Your Own Hype Cycle With Gartner’s Hype Cycle Builder
The Future of AI: Reshaping Society
Innovation Insight for Generative AI
© 2023 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of
Gartner, Inc. and its affiliates. This publication may not be reproduced or distributed in any form
without Gartner's prior written permission. It consists of the opinions of Gartner's research
organization, which should not be construed as statements of fact. While the information contained in
this publication has been obtained from sources believed to be reliable, Gartner disclaims all warranties
as to the accuracy, completeness or adequacy of such information. Although Gartner research may
address legal and financial issues, Gartner does not provide legal or investment advice and its research
should not be construed or used as such. Your access and use of this publication are governed by
Gartner's Usage Policy. Gartner prides itself on its reputation for independence and objectivity. Its
research is produced independently by its research organization without input or influence from any
third party. For further information, see "Guiding Principles on Independence and Objectivity." Gartner
research may not be used as input into or for the training or development of generative artificial
intelligence, machine learning, algorithms, software, or related technologies.
High benefit: Data Labeling and Annotation, AI Maker and Teaching Kits, AI Engineering, Neuro-Symbolic AI, AI TRiSM, AI Simulation, Edge AI, Causal AI, ModelOps, Cloud AI Services, Multiagent Systems, Data-Centric AI, Operational AI Systems, Knowledge Graphs, Smart Robots, Prompt Engineering and Synthetic Data.
Phase Definitions
■ Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the innovation is pushed to its limits. The only enterprises making money are conference organizers and content publishers.
■ Trough of Disillusionment: Because the innovation does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.
■ Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the innovation's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.
■ Plateau of Productivity: The real-world benefits of the innovation are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.
■ Years to Mainstream Adoption: The time required for the innovation to reach the Plateau of Productivity.
Benefit Ratings
■ Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.
■ High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.
■ Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.