
    Generative AI essentials: what everyone needs to know about GenAI

    Written by: Matt Casey
    Published: August 16, 2023

    Experts have named generative AI the most transformative technology of the last decade. GenAI has altered, and will continue to alter, our relationship with AI as it pushes computers into domains previously reserved for humans, such as writing and art creation.

    So, what is generative AI, how does it work, and why does it matter? We’ll address all of that in this concise guide. It is not an exhaustive treatment; instead, it aims to explain the most important aspects of GenAI that everyone should know, in plain and accessible language.

    Everyone should be sure to read the five takeaways in the final section of this post.

    What is generative AI?

    Put simply: generative artificial intelligence (AI) is any AI application that creates open-ended output.

    The vast majority of AI systems fall under the category of discriminative AI. These systems predict numeric or categorical values about particular subjects, based on mathematical relationships learned from past examples. When a rideshare app estimates a driver’s arrival time or a streaming platform recommends a movie, that’s discriminative AI.

    Generative AI uses representations extracted from training data to generate new content in its chosen medium. An image-based generative AI, for example, learns and mirrors the common patterns that define human faces.
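
    To make the distinction concrete, here is a minimal, purely illustrative Python sketch. Both functions (and the arrival-time formula) are invented for this example and are not real models: the discriminative one returns a single prediction about its input, while the generative one produces one of many possible open-ended continuations.

```python
import random

# Toy stand-ins invented for illustration, not real models: the point is only
# the shape of the two outputs.

def discriminative_model(ride: dict) -> float:
    """Discriminative AI: predicts one specific value about a given subject,
    e.g. an estimated arrival time in minutes (made-up formula)."""
    return 2.0 + 0.5 * ride["distance_km"] + 1.5 * ride["traffic_level"]

def generative_model(prompt: str) -> str:
    """Generative AI: produces open-ended output; many different
    continuations are plausible for the same prompt."""
    continuations = [
        " a slice of pepperoni pizza.",
        " a quiet walk under a dark night sky.",
        " a short email apologizing for the delay.",
    ]
    return prompt + random.choice(continuations)

print(discriminative_model({"distance_km": 4.0, "traffic_level": 2}))  # a single number
print(generative_model("Tonight I would like"))  # one of many possible texts
```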

    A generative AI model generally works in one or more of the following media:

    • Text
    • Images
    • Audio
    • Video

    The applications for this range widely—from colorizing images to writing emails to generating likely protein structures.

    “I think of generative AI as artificial intelligence that can produce output that is open-ended. The output is one of an infinite number of possibilities.”

    Chris Glaze
    Snorkel researcher

    How does generative AI work?

    Like other forms of artificial intelligence, generative AI starts with math. And lots of it.

    But math covers only the model itself. Developers’ impressive generative applications wrap the model in a stack of other technologies and train it on mountains of data.

    The models themselves

    The models at the core of generative AI layer millions or billions of small numbers and equations in a structure called a “neural network.” These numeric machines take in the basic signals produced by a medium (light at different frequencies, letters in sequence) and turn them into a long, ordered series of numbers called “vectors.”

    How does that result in pictures, sonnets, and deep-faked voices? The software immediately surrounding the models converts input data into vectors and converts the model’s output vectors back into the medium on the other end.

    Words, sounds, pixels—they all become collections of numbers. And that lets the model do its work.
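
    As a rough illustration of that round trip, the toy sketch below maps characters to integers and back. Real systems use learned tokenizers and high-dimensional embeddings rather than this character table, so treat it only as a picture of the idea.

```python
# A toy encode/decode round trip: text -> numbers -> text.
text = "pepperoni pizza"

vocab = sorted(set(text))                                # the "alphabet" this toy knows
char_to_id = {ch: i for i, ch in enumerate(vocab)}
id_to_char = {i: ch for ch, i in char_to_id.items()}

vector = [char_to_id[ch] for ch in text]                 # text -> ordered series of numbers
restored = "".join(id_to_char[i] for i in vector)        # numbers -> text

print(vector)
print(restored == text)  # True: nothing is lost in the round trip
```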

    The data

    Data powers generative AI. Shown an enormous number of examples, a model learns to mimic them and even devise variations. The models learn the math needed to predict that the night sky should be dark and that the word “pepperoni” is more likely to precede “pizza” than “conifer.”

    In general, the more data you feed a generative model (and the higher its quality), the better the model performs.
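
    As a toy picture of what “learning from examples” means, the sketch below counts which word tends to precede “pizza” in a tiny made-up corpus. Real models learn billions of such statistical regularities jointly, but the principle is the same.

```python
from collections import Counter

# Tiny made-up corpus, purely for illustration.
corpus = [
    "i ordered a pepperoni pizza",
    "she baked a pepperoni pizza at home",
    "the conifer forest smelled of pine",
    "we shared a mushroom pizza",
]

preceding = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev_word, word in zip(words, words[1:]):
        if word == "pizza":
            preceding[prev_word] += 1

# Estimate how likely each word is to appear right before "pizza".
total = sum(preceding.values())
for word, count in preceding.most_common():
    print(f"P(previous word is {word!r}) = {count / total:.2f}")
```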

    The generative AI stack

    Like most technologies, generative AI has a “stack.” The exact formation of this stack will vary, but—from a high level—you can think of it like this:

    1. Application layer.
    2. Supporting tools and resources.
    3. Model.
    4. Data.

    A very simplified generative AI (GenAI) stack.

    The application layer is the interface that users experience—the chat interface for ChatGPT, for example.

    Supporting tools and resources encompass everything from the cloud service that hosts the model to services that monitor the application’s performance.

    The model layer is essential, but more replaceable than you might think. Models start with “architectures”—particular arrangements of neural networks—that data scientists train on data. Sometimes these applications start with foundation models (FMs), which have already undergone a certain amount of training. Data scientists then customize these foundation models with additional data before deployment.

    The bottom layer—the most irreplaceable part of the stack—is the data itself.
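
    To make the layers concrete, here is a minimal sketch of the model and application layers, assuming the Hugging Face transformers library and the small, publicly available gpt2 checkpoint as a stand-in foundation model; neither is prescribed by this article. A production stack would wrap this core in hosting, monitoring, and further training on domain-specific data.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# public "gpt2" checkpoint as a stand-in foundation model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")     # converts text to vectors and back
model = AutoModelForCausalLM.from_pretrained("gpt2")  # the model layer

def application_layer(prompt: str) -> str:
    """The interface a user actually sees, e.g. a chat box."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=20)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(application_layer("The night sky is"))
```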

    Why does generative AI matter?

    Generative AI introduces computers to domains previously reserved for humans. This will amplify the capabilities of artists, managers, and workers.

    Key benefits

    Generative AI already amplifies human capabilities in a variety of ways, such as the following:

    • Increased productivity. A study from the National Bureau of Economic Research found that call center workers given access to a large language model-powered application solved 14% more issues per hour than their peers. This benefit impacted the least experienced workers the most.
    • Faster coding. A GitHub study found that 92% of surveyed software developers in the U.S. used AI tools to develop code faster, and 70% said it gave them an advantage in their jobs.
    • Quick prototyping and correction. Film director Joe Penna told the audience at Snorkel AI’s Foundation Model summit that generative AI tools allow people like him to make movies faster and cheaper. He demonstrated applications that let him simulate shots prior to filming and even re-light or re-angle them in post-production.

    As the field matures, developers will find additional ways for GenAI to amplify human performance.

    Generative AI == GenAI

    AI developers use these terms interchangeably.

    The potential risks of generative AI

    Generative AI, like any other technology, is a tool. And any tool can be misused.

    The most immediate dangers of GenAI fall into three categories:

    1. Intellectual property violation. Artists and rights holders have complained that systems like Stable Diffusion and Dall-E learned how to mimic their work by consuming it. Writers have made similar complaints about generative language models, and have sued over IP theft.
    2. Disinformation at scale. Generative tools allow propagandists to churn out faked documents, photos, and even videos at a never-before-seen velocity and precision. In addition to bolstering narratives through lies, this new reality will make it harder to trust the media we consume.
    3. Encoded bias. Generative models mirror their training data. This can lead them to unintentionally encode racist or sexist attitudes into their creations, such as when a researcher found that ChatGPT stubbornly assumed nurses are women and doctors are men.

    While large players in the space can make ethical choices (and have already promised to do so), those choices can themselves invite controversy. Efforts to mitigate encoded bias in the image generation features of Google’s Gemini stirred an uproar when the model generated racially diverse images of the United States’ founding fathers. In addition, choices such as these will not stop bad actors from stealing intellectual property or deliberately destabilizing our information ecosystem.

    The future of generative AI

    As businesses and consumers become increasingly familiar with generative AI, two things will happen:

    1. GenAI’s most valuable applications will become clear.
    2. Expectations for the technology will radically realign.

    Gartner’s 2023 Hype Cycle for AI noted that the gap between expectations and outcomes on GenAI is currently quite wide.

    Generative AI is not magic. But it will change workplaces. EY’s Ken Pryadarshi told the audience at our Future of Data-Centric AI conference that he imagines a near future where financial professionals perform their roles with the aid of GenAI-powered assistants. The call center findings and GitHub’s Copilot survey point to just this kind of future across many professions.

    Workers and businesses that start learning how to best use these tools will likely benefit the most in the long term.

    Frequently asked questions (FAQs) about generative AI

    This section will answer some common questions about generative AI.

    How accurate is generative AI?

    Most generative AI tasks defy accuracy metrics. How do you gauge the accuracy of a haiku? Researchers will often use relative measures of performance involving human preference data in which people compare generated output with some baseline (from another model or something a human wrote). Researchers have also applied generative AI to discriminative tasks and found generative models to be less accurate than discriminative models specialized for the same task.
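
    As a toy illustration of that kind of relative evaluation, the sketch below computes a simple win rate from invented human preference judgments; both the judgments and the tie-handling convention are assumptions for the example, not a method described in this article.

```python
# Invented judgments, purely for illustration: each entry records which output
# a human judge preferred when shown the model's text next to a baseline's.
preferences = ["model", "baseline", "model", "model", "tie", "baseline", "model"]

wins = preferences.count("model")
ties = preferences.count("tie")

# One common convention: count a tie as half a win.
win_rate = (wins + 0.5 * ties) / len(preferences)
print(f"Win rate vs. baseline: {win_rate:.0%}")
```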

    What are the limitations of generative AI?

    Generative AI has many limitations. The models are enormous, expensive to run, and slow. In addition, their training data constrains their knowledge and makes them prone to accidental misinformation. When prompted to address topics outside their training data, the models can “hallucinate” answers. The technology also comes with its share of potential dangers.

    Will generative AI replace humans?

    No. Generative AI makes many tasks easier and faster, but it does not eliminate the need for humans. As the Harvard Business Review said: AI Won’t Replace Humans — But Humans With AI Will Replace Humans Without AI.

    What are some examples of generative AI?

    When most people think of generative AI, they think of ChatGPT. Other well-known GenAI applications include Bing Chat, Dall-E, Bard, and Midjourney. But the category also includes applications based on generative adversarial networks, such as the tools that created convincing pictures of human faces and many of the filters on apps like Snapchat and Instagram.

    What are the biggest challenges facing generative AI?

    Generative AI’s biggest challenges come from its training data. Gathering sufficient high-quality training data for these enormous models—and ensuring its accuracy—takes substantial effort. That challenge is growing. Most GenAI training data comes from the internet, which has seen an explosion in AI-generated content. Researchers have found that training GenAI on AI-generated content causes the model to perform worse.

    Final thoughts: Five things everyone must know about GenAI

    If you take nothing else from this article, take away these five points.

    1. Generative AI mirrors its training data. This allows developers to specialize their generative models, but it also means the models amplify biases in training data.
    2. It’s mostly math. The neural networks at the core of generative AI are black boxes, but the millions, billions, or trillions of small calculations they perform enable their creations.
    3. GenAI is already making people more productive. From AI-aided code to Bing’s ChatGPT-aided search, people are already getting things done faster due to GenAI.
    4. GenAI will only become more prevalent. Nearly half of respondents to a Snorkel poll said they expected their company to use LLMs in production by the end of 2023, and Gartner declared genAI the most transformative tech of the last decade.
    5. It’s not magic. GenAI will solve some problems and ease others. It will not replace all humans. But humans with AI will replace humans without AI.

    Learn More

    Follow Snorkel AI on LinkedIn, Twitter, and YouTube to be the first to see new posts and videos!
