API GATEWAY

Deploying Envoy as an API Gateway for Microservices

Jake Beck
November 8, 2017 | 8 min read

An API Gateway is a façade that sits between the consumers and producers of an API. Cross-cutting functionality such as authentication, monitoring, and traffic management is implemented in your API Gateway so that your services can remain unaware of these details. In addition, when multiple services are responsible for different APIs (e.g., in a microservices architecture), an API Gateway hides this implementation detail from the consumer.

There are dozens of different options for API Gateways, depending on your requirements. Amazon API Gateway is a hosted gateway that runs on AWS. You could build your own on top of a Layer 7 proxy such as Traefik, NGINX, HAProxy, or Envoy. Each of these has its own strengths and weaknesses.

In general, though, you want to pick an API gateway that can accelerate your development workflow. Traditional API gateways focus on the challenges of API management, so if you're building microservices on Kubernetes, a Kubernetes-native API gateway that enables rapid development of services is essential.

Here at Datawire, we've been using Envoy for microservices. Envoy is interesting because, in addition to providing the reverse proxy semantics you need to implement an API Gateway, it also supports the features you need for distributed architectures (in fact, the Istio project builds on Envoy to provide a full-blown services mesh).

So let's take a closer look at deploying Envoy as a full-fledged, self-service API gateway. If you've been following along with our Envoy tutorial so far, we've done the following:

  1. Created a Docker container based on the official Docker image
  2. Crafted an Envoy configuration file
  3. Deployed Envoy with the appropriate configuration file

Introducing Edge Stack API Gateway

This approach starts to get cumbersome as you add complexity to your deployment. For example, every configuration change requires editing a (complex!) configuration file, and redeploying Envoy. And, we've glossed over the operational aspects of keeping multiple Envoy instances running for scalability and availability.
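
To give a sense of what that configuration file looks like, here is a heavily abridged sketch of a hand-written Envoy config that routes a single prefix to an upstream service. The field names follow Envoy's current v3 API rather than the older format this tutorial originally used, so treat it as illustrative only:

# envoy.yaml -- illustrative sketch; a real gateway config grows quickly from here.
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              # Route /httpbin/... to the httpbin cluster, rewriting the prefix and Host header.
              - match: { prefix: "/httpbin/" }
                route:
                  prefix_rewrite: "/"
                  cluster: httpbin
                  host_rewrite_literal: httpbin.org
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: httpbin
    type: LOGICAL_DNS
    connect_timeout: 1s
    load_assignment:
      cluster_name: httpbin
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: httpbin.org, port_value: 80 }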

We thought there would be an easier way, so we wrote Edge Stack. Here's what Edge Stack API Gateway does:

  • Makes it easy to change and add to your Envoy configuration via Kubernetes annotations
  • Adds the out-of-the-box configuration necessary for production Envoy, e.g., monitoring, health/liveness checks, and more
  • Extends Envoy with traditional API Gateway functionality such as authentication
  • Integrates with Istio, for organizations who need a full-blown service mesh

Edge Stack is a Kubernetes-native API Gateway. This means that it delegates all the hard parts of scaling and availability to Kubernetes. Want to upgrade Edge Stack with no downtime? No problem -- just use a Kubernetes rolling update.
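
For example, a zero-downtime upgrade boils down to ordinary kubectl commands. This is just a sketch; the deployment and container names assume the installation described later in this post, and the image reference is a placeholder:

# Roll out a new Edge Stack image, then wait for the rollout to finish.
kubectl set image deployment/ambassador ambassador=<new-edge-stack-image>
kubectl rollout status deployment/ambassador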

Setting Up

We're going to assume that your basic infrastructure is set up enough that you have a Kubernetes cluster running in your cloud environment of choice. Specifically, we assume that:

  • You have kubectl correctly talking to a Kubernetes cluster running in EC2 or GKE.
    • This is probably obvious, but it's tough to work with a Kubernetes cluster if you can't talk to it with kubectl.
  • You have docker installed and working.
    • Since we'll be building Docker images, we need a working docker command.
  • You have credentials to push Docker images to either Docker Hub or the Google Container Registry (gcr.io).

That last point is worth a little more discussion. To run something in Kubernetes, we have to be able to pull a Docker image from somewhere that the cluster can reach. When using Minikube, this is no problem, since Minikube runs its own Docker daemon: by definition, anything in the Minikube cluster can talk to that Docker daemon. However, things are different once GKE or EC2 come into play: they can't talk to a Docker daemon on your laptop without heroic measures, so you'll need to explicitly push images somewhere accessible.
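
If you haven't pushed an image before, the flow looks roughly like this. The image name and tag here are purely illustrative; substitute your own registry, repository, and tag:

# Build the image locally, tag it for a registry the cluster can reach, and push it.
docker build -t usersvc:0.1 .
docker tag usersvc:0.1 docker.io/<your-username>/usersvc:0.1
docker push docker.io/<your-username>/usersvc:0.1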

The Edge Stack service

Edge Stack is deployed as a Kubernetes service. The following configuration will create a service for Edge Stack. It also adds a mapping that will route traffic from /httpbin/ to the public httpbin.org service.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: httpbin_mapping
      prefix: /httpbin/
      service: httpbin.org:80
      host_rewrite: httpbin.org
spec:
  type: LoadBalancer
  ports:
  - name: ambassador
    port: 80
    targetPort: 80
  selector:
    service: ambassador

By using Kubernetes annotations, Edge Stack integrates transparently into your existing Kubernetes deployment workflow, so standard Kubernetes deployment tools work naturally with Edge Stack.

Save the above YAML into a file called ambassador-service.yaml, and type kubectl apply -f ambassador-service.yaml to deploy the service.
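
The same annotation pattern is how you route to your own services: add a Mapping to the service's own manifest and Edge Stack picks it up automatically. As a sketch, with a hypothetical usersvc (the name, prefix, and ports are placeholders):

---
apiVersion: v1
kind: Service
metadata:
  name: usersvc
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: usersvc_mapping
      prefix: /user/
      service: usersvc
spec:
  ports:
  - name: http
    port: 80
    targetPort: 5000
  selector:
    app: usersvc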

Starting Edge Stack

We have an Edge Stack service, but we don't actually have Edge Stack running. To do this, we'll need a Kubernetes deployment. If you're using a cluster with RBAC enabled, you'll need to use:

kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml

Without RBAC, you can use:

kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml

Once that's done, you should see three pods for Ambassador:

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-1378270275-51qns 2/2 Running 0 2m
ambassador-1378270275-kr361 2/2 Running 0 2m
ambassador-1378270275-v10x8 2/2 Running 0 2m
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ambassador 109.168.235.177 a1128c0831f9e... 80/TCP 3m
kubernetes 109.164.0.1 <none> 443/TCP 9m

Edge Stack relies on Kubernetes for replication, availability, and scaling, which keeps Edge Stack itself very simple. It also relies on Kubernetes for persistence, so Edge Stack has no database.
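
That also means day-to-day operations are plain kubectl. For example, scaling Edge Stack is just scaling its deployment (the replica count below is arbitrary):

# Scale the Edge Stack deployment like any other Kubernetes workload.
kubectl scale deployment ambassador --replicas=5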

Setting AMBASSADORURL

In order to get access to your microservices through Edge Stack, you'll need an external URL to Edge Stack's service interface. We'll use $AMBASSADORURL as shorthand for the base URL of Ambassador.

We'll need to start by getting the external IP address. You can get the IP address by running kubectl describe service ambassador and looking at the LoadBalancer Ingress line. (On Minikube, you'll need to use minikube service --url ambassador.) Set the value of AMBASSADORURL to this address, e.g.:

export AMBASSADORURL=109.34.24.5
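
If you'd rather not copy the address by hand, you can pull it straight out of the service status. This assumes your load balancer reports an IP address; adjust the jsonpath if it reports a hostname instead:

export AMBASSADORURL=$(kubectl get service ambassador \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')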

In any case, do not include a trailing / in $AMBASSADORURL, or the examples in this document won't work.

Testing Edge Stack

We can now speak to the httpbin service using Edge Stack:

$ curl $AMBASSADORURL/httpbin/ip/

This will send a request to Ambassador, which then routes the request to the httpbin service.

Diagnostics

Under the hood, Edge Stack relies on Envoy (and its powerful feature set) for routing, TLS, and the like. Edge Stack includes a diagnostics service that gives more insight into the Envoy configuration it is managing. These diagnostics are not publicly exposed by default.

You can access the diagnostics by getting the list of Edge Stack pods:

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-1378270275-51qns 2/2 Running 0 2m
ambassador-1378270275-kr361 2/2 Running 0 2m
ambassador-1378270275-v10x8 2/2 Running 0 2m

then forwarding to port 8877 on one of the pods:

kubectl port-forward ambassador-1378270275-51qns 8877

And then visiting http://localhost:8877 in your web browser.

Up Next

In this article, we've shown how you can deploy Envoy as an API Gateway using Ambassador. Envoy has many powerful features, such as sophisticated load balancing algorithms, advanced statistics and monitoring, and more. Ambassador exposes many of these features through annotations to support a variety of use cases. To learn more, read about canary deployments or see how to use Prometheus to monitor Envoy and Edge Stack.