2 changes: 1 addition & 1 deletion docs/install/install.rst
@@ -10,5 +10,5 @@ Building and installation
building
installation
ref_configs
sandboxes
sandboxes/sandboxes.rst
tools
@@ -1,14 +1,7 @@
.. _install_sandboxes:

Sandboxes
=========

The docker-compose sandboxes give you different environments to test out Envoy's
features. As we gauge people's interests we will add more sandboxes demonstrating
different features.
.. _install_sandboxes_front_proxy:

Front Proxy
-----------
===========

To get a flavor of what Envoy has to offer as a front proxy, we are releasing a
`docker compose <https://docs.docker.com/compose/>`_ sandbox that deploys a front
@@ -233,88 +226,3 @@ statistics. For example inside ``frontenvoy`` we can get::
Notice that we can get the number of members of upstream clusters, number of requests
fulfilled by them, information about http ingress, and a plethora of other useful
stats.
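
As an illustration (not part of this diff), typical entries in the admin ``/stats``
output look like the following; the exact stat names and values depend on the
configuration and traffic::

    cluster.service1.membership_total: 1
    cluster.service1.upstream_cx_total: 4
    cluster.service1.upstream_rq_total: 25
    http.ingress_http.downstream_rq_2xx: 25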

gRPC bridge
-----------

Envoy gRPC
~~~~~~~~~~

The gRPC bridge sandbox is an example usage of Envoy's
:ref:`gRPC bridge filter <config_http_filters_grpc_bridge>`.
Included in the sandbox is a gRPC in memory Key/Value store with a Python HTTP
client. The Python client makes HTTP/1 requests through the Envoy sidecar
process which are upgraded into HTTP/2 gRPC requests. Response trailers are then
buffered and sent back to the client as an HTTP/1 header payload.

Another Envoy feature demonstrated in this example is Envoy's ability to do
authority-based routing via its route configuration.

Building the Go service
~~~~~~~~~~~~~~~~~~~~~~~

To build the Go gRPC service run::

$ pwd
~/src/envoy/examples/grpc-bridge
$ script/bootstrap
$ script/build

Docker compose
~~~~~~~~~~~~~~

To run the docker compose file and set up both the Python and the gRPC containers,
run::

$ pwd
~/src/envoy/examples/grpc-bridge
$ docker-compose up --build

Sending requests to the Key/Value store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To use the python service and sent gRPC requests::

$ pwd
~/src/envoy/examples/grpc-bridge
# set a key
$ docker-compose exec python /client/client.py set foo bar
setf foo to bar

# get a key
$ docker-compose exec python /client/client.py get foo
bar

Locally building a docker image with an envoy binary
----------------------------------------------------

The following steps guide you through building your own envoy binary, and
putting that in a clean ubuntu container.

**Step 1: Build Envoy**

Using ``lyft/envoy-build`` you will compile envoy.
This image has all software needed to build envoy. From your envoy directory::

$ pwd
src/envoy
$ ./ci/run_envoy_docker.sh './ci/do_ci.sh bazel.dev'

That command will take some time to run because it is compiling an envoy binary and running tests.

For more information on building and different build targets, please refer to :repo:`ci/README.md`.

**Step 2: Build image with only envoy binary**

In this step we'll build an image that only has the envoy binary, and none
of the software used to build it::

$ pwd
src/envoy/
$ docker build -f ci/Dockerfile-envoy-image -t envoy .

Now you can use this ``envoy`` image to build any of the sandboxes if you change
the ``FROM`` line in any dockerfile.

This will be particularly useful if you are interested in modifying envoy, and testing
your changes.
52 changes: 52 additions & 0 deletions docs/install/sandboxes/grpc_bridge.rst
@@ -0,0 +1,52 @@
.. _install_sandboxes_grpc_bridge:

gRPC Bridge
===========

Envoy gRPC
~~~~~~~~~~

The gRPC bridge sandbox is an example usage of Envoy's
:ref:`gRPC bridge filter <config_http_filters_grpc_bridge>`.
Included in the sandbox is a gRPC in-memory Key/Value store with a Python HTTP
client. The Python client makes HTTP/1 requests through the Envoy sidecar
process which are upgraded into HTTP/2 gRPC requests. Response trailers are then
buffered and sent back to the client as an HTTP/1 header payload.

Another Envoy feature demonstrated in this example is Envoy's ability to do
authority-based routing via its route configuration.
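
As an illustrative sketch only (not part of this PR), the two pieces could look roughly
like the following in Envoy's v1 JSON configuration; the virtual host, domain, and cluster
names here are made up, and the real configs live under :repo:`/examples/grpc-bridge/config`::

    {
      "route_config": {
        "virtual_hosts": [
          {
            "name": "grpc_backend",
            "domains": ["grpc"],
            "routes": [
              {"timeout_ms": 0, "prefix": "/", "cluster": "local_grpc_service"}
            ]
          }
        ]
      },
      "filters": [
        {"type": "both", "name": "grpc_http1_bridge", "config": {}},
        {"type": "decoder", "name": "router", "config": {}}
      ]
    }

A request whose ``:authority`` (Host) header matches ``grpc`` is routed to the gRPC
cluster, while the bridge filter translates between HTTP/1 and HTTP/2 gRPC framing.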

Building the Go service
~~~~~~~~~~~~~~~~~~~~~~~

To build the Go gRPC service run::

$ pwd
~/src/envoy/examples/grpc-bridge
$ script/bootstrap
$ script/build

Docker compose
~~~~~~~~~~~~~~

To run the docker compose file and set up both the Python and the gRPC containers,
run::

$ pwd
~/src/envoy/examples/grpc-bridge
$ docker-compose up --build

Sending requests to the Key/Value store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To use the Python service and send gRPC requests::

$ pwd
~/src/envoy/examples/grpc-bridge
# set a key
$ docker-compose exec python /client/client.py set foo bar
setf foo to bar

# get a key
$ docker-compose exec python /client/client.py get foo
bar
35 changes: 35 additions & 0 deletions docs/install/sandboxes/local_docker_build.rst
@@ -0,0 +1,35 @@
.. _install_sandboxes_local_docker_build:

Building an Envoy Docker image
==============================

The following steps guide you through building your own Envoy binary, and
putting that in a clean Ubuntu container.

**Step 1: Build Envoy**

Using ``lyft/envoy-build`` you will compile Envoy.
This image has all software needed to build Envoy. From your Envoy directory::

$ pwd
src/envoy
$ ./ci/run_envoy_docker.sh './ci/do_ci.sh bazel.release'

That command will take some time to run because it is compiling an Envoy binary and running tests.

For more information on building and different build targets, please refer to :repo:`ci/README.md`.

**Step 2: Build image with only the Envoy binary**

In this step we'll build an image that only has the Envoy binary, and none
of the software used to build it::

$ pwd
src/envoy/
$ docker build -f ci/Dockerfile-envoy-image -t envoy .
Member: This seems to be dupe of the above file suffix, can you de-dupe?

Member Author: Oh shux.. good catch.. removed that stuff from grpc_bridge.rst

Now you can use this ``envoy`` image to build any of the sandboxes if you change
the ``FROM`` line in any Dockerfile.

This will be particularly useful if you are interested in modifying Envoy, and testing
your changes.
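
As an illustration, the first line of ``examples/front-proxy/Dockerfile-frontenvoy``
(which appears later in this diff) would change from::

    FROM lyft/envoy:latest

to::

    FROM envoy:latest
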
16 changes: 16 additions & 0 deletions docs/install/sandboxes/sandboxes.rst
@@ -0,0 +1,16 @@
.. _install_sandboxes:

Sandboxes
=========

The docker-compose sandboxes give you different environments to test out Envoy's
features. As we gauge people's interests we will add more sandboxes demonstrating
different features. The following sandboxes are available:

.. toctree::
:maxdepth: 1

front_proxy
zipkin_tracing
grpc_bridge
local_docker_build
82 changes: 82 additions & 0 deletions docs/install/sandboxes/zipkin_tracing.rst
@@ -0,0 +1,82 @@
.. _install_sandboxes_zipkin_tracing:

Zipkin Tracing
==============

The Zipkin tracing sandbox demonstrates Envoy's :ref:`request tracing <arch_overview_tracing>`
capabilities using `Zipkin <http://zipkin.io/>`_ as the tracing provider. This sandbox
is very similar to the front proxy architecture described above, with one difference:
service1 makes an API call to service2 before returning a response.
The three containers will be deployed inside a virtual network called ``envoymesh``.

All incoming requests are routed via the front envoy, which is acting as a reverse proxy
sitting on the edge of the ``envoymesh`` network. Port ``80`` is mapped to port ``8000``
by docker compose (see :repo:`/examples/zipkin-tracing/docker-compose.yml`). Notice that
all envoys are configured to collect request traces (e.g., http_connection_manager/config/tracing setup in
:repo:`/examples/zipkin-tracing/front-envoy-zipkin.json`) and setup to propagate the spans generated
by the Zipkin tracer to a Zipkin cluster (trace driver setup
in :repo:`/examples/zipkin-tracing/front-envoy-zipkin.json`).
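
As an illustrative sketch (not a verbatim copy of those files), the v1 JSON configuration
combines a top-level trace driver with a tracing block on each HTTP connection manager;
the ``zipkin`` cluster name and collector endpoint below are assumptions::

    {
      "tracing": {
        "http": {
          "driver": {
            "type": "zipkin",
            "config": {
              "collector_cluster": "zipkin",
              "collector_endpoint": "/api/v1/spans"
            }
          }
        }
      }
    }

and, inside each ``http_connection_manager`` config::

    "tracing": {
      "operation_name": "ingress"
    }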

Before routing a request to the appropriate service envoy or the application, Envoy will take
care of generating the appropriate spans for tracing (parent/child/shared context spans).
At a high level, each span records the latency of upstream API calls as well as information
needed to correlate the span with other related spans (e.g., the trace ID).

One of the most important benefits of tracing from Envoy is that it will take care of
propagating the traces to the Zipkin service cluster. However, in order to fully take advantage
of tracing, the application has to propagate the trace headers that Envoy generates while making
calls to other services. In the sandbox we have provided, the simple Flask app
(see the trace function in :repo:`/examples/front-proxy/service.py`) acting as service1 propagates
the trace headers while making an outbound call to service2.


Running the Sandbox
~~~~~~~~~~~~~~~~~~~

The following documentation runs through the setup of an envoy cluster organized
as is described in the image above.

**Step 1: Build the sandbox**

To build this sandbox example and start the example apps, run the following commands::

$ pwd
envoy/examples/zipkin-tracing
$ docker-compose up --build -d
$ docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------------------------------------------
zipkintracing_service1_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp
zipkintracing_service2_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp
zipkintracing_front-envoy_1 /bin/sh -c /usr/local/bin/ ... Up 0.0.0.0:8000->80/tcp, 0.0.0.0:8001->8001/tcp

**Step 2: Generate some load**

You can now send a request to service1 via the front-envoy as follows::

$ curl -v $(docker-machine ip default):8000/trace/1
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
> GET /trace/1 HTTP/1.1
> Host: 192.168.99.100:8000
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 89
< x-envoy-upstream-service-time: 1
< server: envoy
< date: Fri, 26 Aug 2016 19:39:19 GMT
< x-envoy-protocol-version: HTTP/1.1
<
Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
* Connection #0 to host 192.168.99.100 left intact

**Step 3: View the traces in Zipkin UI**

Point your browser to http://localhost:9411. You should see the Zipkin dashboard.
Set the service to "front-proxy", set the start time to a few minutes before
the start of the test (step 2), and hit enter. You should see traces from the front-proxy.
Click on a trace to explore the path taken by the request from front-proxy to service1
to service2, as well as the latency incurred at each hop.
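
If you prefer the command line, a rough equivalent (assuming the standard Zipkin v1 HTTP
API is exposed on the same port) is::

    $ curl "http://localhost:9411/api/v1/traces?serviceName=front-proxy&limit=10"
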
1 change: 1 addition & 0 deletions docs/operations/faq/overview.rst
@@ -9,3 +9,4 @@ The purpose of this document is to link examples of commonly used deployment scenarios
:maxdepth: 1

zone_aware_routing
zipkin_tracing
7 changes: 7 additions & 0 deletions docs/operations/faq/zipkin_tracing.rst
@@ -0,0 +1,7 @@
.. _common_configuration_zipkin_tracing:

Request tracing with Zipkin
===========================

Refer to the :ref:`zipkin sandbox setup <install_sandboxes_zipkin_tracing>`
for an example of Zipkin tracing configuration.
3 changes: 3 additions & 0 deletions examples/BUILD
@@ -14,5 +14,8 @@ filegroup(
"front-proxy/service-envoy.json",
"grpc-bridge/config/s2s-grpc-envoy.json",
"grpc-bridge/config/s2s-python-envoy.json",
"zipkin-tracing/front-envoy-zipkin.json",
"zipkin-tracing/service1-envoy-zipkin.json",
"zipkin-tracing/service2-envoy-zipkin.json",
],
)
2 changes: 1 addition & 1 deletion examples/front-proxy/Dockerfile-frontenvoy
@@ -2,4 +2,4 @@ FROM lyft/envoy:latest

RUN apt-get update && apt-get -q install -y \
curl
CMD /usr/local/bin/envoy -c /etc/front-envoy.json
CMD /usr/local/bin/envoy -c /etc/front-envoy.json --service-cluster front-proxy
26 changes: 26 additions & 0 deletions examples/front-proxy/service.py
@@ -1,9 +1,21 @@
from flask import Flask
from flask import request
import socket
import os
import sys
import requests

app = Flask(__name__)

TRACE_HEADERS_TO_PROPAGATE = [
'X-Ot-Span-Context',
'X-Request-Id',
'X-B3-TraceId',
'X-B3-SpanId',
'X-B3-ParentSpanId',
'X-B3-Sampled',
'X-B3-Flags'
]

@app.route('/service/<service_number>')
def hello(service_number):
@@ -12,5 +24,19 @@ def hello(service_number):
socket.gethostname(),
socket.gethostbyname(socket.gethostname())))

@app.route('/trace/<service_number>')
def trace(service_number):
headers = {}
# call service 2 from service 1
if int(os.environ['SERVICE_NAME']) == 1:
for header in TRACE_HEADERS_TO_PROPAGATE:
if header in request.headers:
headers[header] = request.headers[header]
Contributor: Normally in zipkin apps, the client creates a new span that is a child of the current span and changes the headers appropriately before making requests to another service.

Member Author: That is true in the normal case. The idea behind Envoy doing the tracing for you is that the application does not have to do any tracing nor does it have to spend resources propagating the traces to a collector server. It eliminates the complexity of adding additional libraries for the purpose of tracing and dealing with all the additional code.

Secondly, the idea is that Envoy's presence is transparent to the application. The amount of time spent in Envoy should be minuscule (if it's sizeable, then there is something wrong with Envoy design :) ).

This is the Envoy model btw. I am just adding examples to go with the model. :).

ret = requests.get("http://localhost:9000/trace/2", headers=headers)
Contributor: do you want to use service_number from within the get instead of 2?

Member Author: You mean service_number + 1 ? Because the entire code is specialized to service version == 1

return ('Hello from behind Envoy (service {})! hostname: {} resolved'
'hostname: {}\n'.format(os.environ['SERVICE_NAME'],
socket.gethostname(),
socket.gethostbyname(socket.gethostname())))

if __name__ == "__main__":
app.run(host='127.0.0.1', port=8080, debug=True)
3 changes: 2 additions & 1 deletion examples/front-proxy/start_service.sh
@@ -1,2 +1,3 @@
#!/bin/bash
python /code/service.py &
envoy -c /etc/service-envoy.json
envoy -c /etc/service-envoy.json --service-cluster service${SERVICE_NAME}
2 changes: 2 additions & 0 deletions examples/zipkin-tracing/README.md
@@ -0,0 +1,2 @@
To learn about this sandbox and for instructions on how to run it, please head over
to the [envoy docs](https://lyft.github.io/envoy/docs/install/sandboxes.html#zipkin-tracing)