Docker Deep Dive: Zero to Docker in a single book
Nigel Poulton
Ebook, 469 pages
Language: English
Release date: Jul 5, 2024
ISBN: 9781835884393

    0: About the book

    This is a book about Docker and containers; no prior knowledge required! In fact, the book’s motto is Zero to Docker in a single book.

    So, if you want to work with cloud and cloud-native technologies, this book is dedicated to you.

    Why should I read this book or care about Docker?

    Docker is here, and it’s changed the world. If you want the best jobs working with the best technologies, you need to know Docker and containers. They’re even central to Kubernetes, and a strong Docker skill set will help you learn Kubernetes. Docker and containers are also well-positioned for emerging cloud technologies such as WebAssembly and AI workloads.

    What if I’m not a developer?

    Most applications, even modern cloud-native microservices, need high-performance production-grade infrastructure. If you think traditional developers will take care of this, think again. To cut a long story short, if you want to thrive in the modern cloud-first world, you must know Docker. But don’t stress, this book will give you all the skills you need.

    How I’ve organized the book

    I’ve divided the book into two main sections:

    The big picture stuff

    The technical stuff

    The big picture stuff gets you up to speed with things like what Docker is, why we have containers, and the fundamental jargon such as cloud-native, microservices, and orchestration.

    The technical stuff section covers everything you need to know about images, containers, multi-container microservices apps, and the increasingly important topic of orchestration. It even covers WebAssembly, vulnerability scanning with Docker Scout, debugging containers, high availability, and more.

    Chapter breakdown

    Chapter 1: Summarizes the history and potential future of Docker and containers

    Chapter 2: Explains the most important container-related standards and projects

    Chapter 3: Shows you a few ways to get Docker

    Chapter 4: Walks you through a very simple hands-on container workflow

    Chapter 5: Explains the architecture of the Docker Engine

    Chapter 6: Dives deep into images and image management

    Chapter 7: Dives deep into containers and container management

    Chapter 8: Walks you through the process of containerizing an app

    Chapter 9: Shows you how to build, deploy, and manage multi-container apps with Compose

    Chapter 10: Walks you through building a secure swarm

    Chapter 11: Deploys and manages a multi-container app on a secure swarm

    Chapter 12: Walks you through building and containerizing a WebAssembly app

    Chapter 13: Dives into Docker networking

    Chapter 14: Builds and tests Docker overlay networks

    Chapter 15: Introduces you to persistent and non-persistent data in Docker

    Chapter 16: Covers all the major Linux and Docker security technologies

    Editions and updates

    Docker and the cloud-native ecosystem are evolving fast, and a 2-3-year-old book on Docker isn’t valuable. As a result, I’m committed to updating the book every year.

    If that sounds excessive, welcome to the new normal.

    The book is available in hardback, paperback, and e-book on all good book publishing platforms.

    When you purchase the Kindle edition, you’re entitled to all future updates. However, Kindle doesn’t always download the latest edition.

    A potential solution is to go to http://amzn.to/2l53jdg and choose Quick Solutions. Then select Digital Purchases, search for your Docker Deep Dive Kindle edition purchase, and select Content and Devices. Your purchase should appear in the list with a button that says Update Available. Click that button. Delete your old version on your Kindle and download the new one.

    If this doesn’t work, your only option is to contact Kindle Support.

    Feedback

    If you like the book and it helps your career, share the love by recommending it to a friend and leaving a review on Amazon or Goodreads.

    If you spot a typo or want to make a recommendation, email me at ddd@nigelpoulton.com

    That’s everything. Let’s get rocking with Docker!

    Part 1: The big picture stuff

    1: Containers from 30,000 feet

    Containers have taken over the world!

    In this chapter, you’ll learn why we have containers, what they do for us, and where we can use them.

    The bad old days

    Applications are the powerhouse of every modern business. When applications break, businesses break.

    Most applications run on servers, and in the past, we were limited to running one application per server. As a result, the story went something like this:

    Every time a business needed a new application, it had to buy a new server. Unfortunately, we weren’t very good at modeling the performance requirements of new applications, so IT departments had to guess. This often resulted in businesses buying very expensive servers with far more performance capability than the apps needed. After all, nobody wanted underpowered servers incapable of handling the app, resulting in unhappy customers and lost revenue. As a result, companies ended up with racks and racks of overpowered servers operating at as little as 5-10% of their potential capacity. This was a tragic waste of company capital and environmental resources!

    Hello VMware!

    Amid all this, VMware, Inc. gave the world a gift — the virtual machine (VM).

    As soon as VMware came along, the world became much better. We finally had a technology that allowed us to safely run multiple business applications on a single server.

    It was a game-changer. Businesses could run new apps on the spare capacity of existing servers, spawning a golden age of maximizing the value of existing assets.

    VMwarts

    But, and there’s always a but! As great as VMs are, they’re far from perfect.

    A feature of the VM model is that every VM needs its own dedicated operating system (OS). Unfortunately, this has several drawbacks, including:

    Every OS consumes CPU, RAM, and other resources we’d rather use on applications

    Every VM and OS needs patching

    Every VM and OS needs monitoring

    VMs are also slow to boot and not very portable.

    Hello Containers!

    While most of us were reaping the benefits of VMs, web scalers like Google had already moved on from VMs and were using containers.

    A feature of the container model is that every container shares the OS of the host it’s running on. This means a single host can run more containers than VMs. For example, a host that can run 10 VMs might be able to run 50 containers, making containers far more efficient than VMs.

    Containers are also faster and more portable than VMs.

    Linux containers

    Modern containers started in the Linux world and are the product of incredible work from many people over many years. For example, Google contributed many container-related technologies to the Linux kernel. It’s thanks to many contributions like these that we have containers today.

    Some of the major technologies behind modern containers include kernel namespaces, control groups (cgroups), capabilities, and more.
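
    If you’re curious, you can poke at these building blocks on any modern Linux machine. Here’s a minimal sketch, assuming a recent kernel with cgroup v2 (exact output varies by distro):

        $ ls -l /proc/self/ns     # kernel namespaces the current shell belongs to (net, pid, mnt, uts, ipc, user, ...)
        $ ls /sys/fs/cgroup       # the cgroup hierarchy used to limit a process's CPU, memory, and I/O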

    However, despite all this great work, containers were incredibly complicated, and it wasn’t until Docker came along that they became accessible to the masses.

    Note: I know that many container-like technologies pre-date Docker and modern containers. However, none of them changed the world the way Docker has.

    Hello Docker!

    Docker was the magic that made Linux containers easy and brought them to the masses. We’ll talk a lot more about Docker in the next chapter.

    Docker and Windows

    Microsoft worked hard to bring Docker and container technologies to the Windows platform.

    At the time of writing, Windows desktop and server platforms support both of the following:

    Windows containers

    Linux containers

    Windows containers run Windows apps and require a host system with a Windows kernel. Windows 10, Windows 11, and all modern versions of Windows Server natively support Windows containers.

    Windows systems can also run Linux containers via WSL 2 (the Windows Subsystem for Linux 2).

    This means Windows 10 and Windows 11 are great platforms for developing and testing Windows and Linux containers.

    However, despite all the work developing Windows containers, almost all containers are Linux containers. This is because Linux containers are smaller and faster, and more tooling exists for Linux.

    All of the examples in this edition of the book are Linux containers.

    Windows containers vs Linux containers

    It’s vital to understand that containers share the kernel of the host they’re running on. This means containerized Windows apps need a host with a Windows kernel, whereas containerized Linux apps need a host with a Linux kernel. However, as mentioned, you can run Linux containers on Windows systems that have the WSL 2 backend installed.
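
    A quick way to see which kind of containers your Docker installation runs is to ask the engine which kernel it’s sitting on. This is a minimal sketch; the output depends on your setup:

        $ docker info --format '{{.OSType}}'
        linux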

    What about Mac containers?

    There is no such thing as Mac containers. However, Macs are great platforms for working with containers, and I do all of my daily work with containers on a Mac.

    The most popular way of working with containers on a Mac is Docker Desktop. It works by running Docker inside a lightweight Linux VM on your Mac. Other tools, such as Podman and Rancher Desktop, are also great for working with containers on a Mac.
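
    You can see this split for yourself. The following is a minimal sketch on a Mac running Docker Desktop, where the client runs natively on macOS (darwin) and the engine runs inside the Linux VM:

        $ docker version --format 'client: {{.Client.Os}}  server: {{.Server.Os}}'
        client: darwin  server: linux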

    What about WebAssembly?

    WebAssembly (Wasm) is a modern binary instruction format for building applications that are smaller, faster, more secure, and more portable than containers. You write your app in your favorite language, compile it to a Wasm binary, and it’ll run anywhere you have a Wasm runtime.
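
    As an example, the following is a minimal sketch of that workflow using Rust and the wasmtime runtime. The app name and paths are hypothetical, and the Rust target name varies with toolchain version (older toolchains call it wasm32-wasi):

        $ rustup target add wasm32-wasip1                        # add the WASI compile target
        $ cargo build --release --target wasm32-wasip1           # compile the app to a Wasm binary
        $ wasmtime ./target/wasm32-wasip1/release/myapp.wasm     # run it anywhere a Wasm runtime exists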

    However, Wasm apps have many limitations, and we’re still developing many of the standards. As a result, containers remain the dominant model for cloud-native applications.

    The container ecosystem is also much richer and more mature than the Wasm ecosystem.

    As you’ll see in the Wasm chapter, Docker and the container ecosystem are adapting to work with Wasm apps, and you should expect a future where VMs, containers, and Wasm apps run side-by-side in most clouds and applications.

    This book is up-to-date with the latest Wasm and container developments.

    What about Kubernetes?

    Kubernetes is the industry standard platform for deploying and managing containerized apps.

    Terminology: A containerized app is an application running as a container. We’ll cover this in a lot of detail later.

    Older versions of Kubernetes used Docker to start and stop containers. However, newer versions use containerd, which is a stripped-down version of Docker optimized for use by Kubernetes and other platforms.

    The important thing to know is that all Docker containers work on Kubernetes.
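
    For example, an image you build and push with Docker can be deployed to a Kubernetes cluster unchanged. This is a minimal sketch; the registry and image names are hypothetical:

        $ docker build -t registry.example.com/web:1.0 .        # build the image with Docker
        $ docker push registry.example.com/web:1.0              # push it to a registry
        $ kubectl create deployment web --image registry.example.com/web:1.0    # Kubernetes pulls and runs the same image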

    Check out these resources if you need to learn Kubernetes:

    Quick Start Kubernetes: This is ~100 pages and will get you up to speed with Kubernetes in one day!

    The Kubernetes Book: This is the ultimate book for mastering Kubernetes.

    I update both books annually to ensure they’re up-to-date with the latest and greatest developments in the cloud native ecosystem, including WebAssembly.

    Chapter Summary

    We used to live in a world where every time the business needed a new application, we had to buy a brand-new server. VMware came along and allowed us to drive more value out of new and existing servers. Following the success of VMware and hypervisors came a newer, more efficient, and more portable virtualization technology called containers. However, containers were complex and hard to implement until Docker came along and made them easy. WebAssembly is powering a third wave of cloud computing, but Docker and the container ecosystem are evolving to work with it, and the book has an entire chapter dedicated to Docker and WebAssembly.

    2: Docker and container-related standards and projects

    This chapter introduces you to Docker and some of the most important standards and projects shaping the container ecosystem. The goal is to lay some foundations that we’ll build on in later chapters.

    This chapter has two main parts:

    Docker

    Container-related standards and projects

    Docker

    Docker is at the heart of the container ecosystem. However, the term Docker can mean two things:

    The Docker platform

    Docker, Inc.

    The Docker platform is a neatly packaged collection of technologies for creating, managing, and orchestrating containers. Docker, Inc. is the company that created the Docker platform and continues to be the driving force behind developing new features.

    Let’s dive a bit deeper.

    Docker, Inc.

    Docker, Inc. is a technology company based out of Palo Alto and founded by French-born American developer and entrepreneur Solomon Hykes. Solomon is no longer at the company.

    The company started as a platform as a service (PaaS) provider called dotCloud. Behind the scenes, dotCloud delivered their services on top of containers and had an in-house tool to help them deploy and manage those containers. They called this in-house tool Docker.

    The word Docker is a British expression meaning dock worker, referring to a person who loads and unloads cargo from ships.

    In 2013, dotCloud dropped the struggling PaaS side of the business, rebranded as Docker, Inc., and focused on bringing Docker and containers to the world.

    The Docker technology

    The Docker platform is designed to make it as easy as possible to build, ship, and run containers.

    At a high level, there are two major parts to the Docker platform:

    The CLI (client)

    The engine (server)

    The CLI is the familiar docker command-line tool for deploying and managing containers. It converts simple commands into API requests and sends them to the engine.

    The engine comprises all the server-side components that run and manage containers.

    Figure 2.1 shows the high-level architecture. The client and engine can be on the same host or connected over the network.

    Figure 2.1 Docker client and engine.

    In later chapters, you’ll see that the client and engine are complex and comprise a lot of small specialized parts. Figure 2.2 gives you an idea of some of the complexity behind the engine. However, the client hides all this complexity so you don’t have to care. For example, you type friendly docker commands into the CLI, the CLI converts them to API requests and sends them to the daemon, and the daemon takes care of everything else.

    Figure 2.2 Docker CLI and daemon hiding complexity.
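
    To make this concrete, here’s a minimal sketch assuming a default local install where the daemon listens on the /var/run/docker.sock Unix socket. Both commands ask the engine for the list of running containers; the first goes through the friendly CLI, the second talks to the engine’s REST API directly:

        $ docker ps
        $ curl --unix-socket /var/run/docker.sock http://localhost/containers/json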

    Let’s switch focus and briefly look at some standards and governance bodies.

    Container-related standards and projects

    There are several important standards and governance bodies influencing the development of containers and the container ecosystem. Some of these include:

    The OCI

    The CNCF

    The Moby Project

    The Open Container Initiative (OCI)

    The Open Container Initiative (OCI) is a governance council responsible for low-level container-related standards.

    It operates under the umbrella of the Linux Foundation and was founded in the early days of the container ecosystem when some of the people at a company called CoreOS didn’t like the way Docker was dominating the ecosystem. In response, CoreOS created an open standard called appc that defined specifications for things such as image format and container runtime. They also created a reference implementation called rkt (pronounced rocket).

    The appc standard did things differently from Docker and put the ecosystem in an awkward position with two competing standards.

    While competition is usually a good thing, competing standards are generally bad, as they generate confusion that slows down user adoption. Fortunately, the main players in the ecosystem came together and formed the OCI as a vendor-neutral lightweight council to govern container standards. This allowed us to archive the appc project and place all low-level container-related specifications under the OCI’s governance.

    At the time of writing, the OCI maintains three standards called the image-spec, the runtime-spec, and the distribution-spec.
