2 changes: 1 addition & 1 deletion docs/README.md
Original file line number Diff line number Diff line change
@@ -11,7 +11,7 @@ focus on solving mundane but difficult tasks such as:

- [Deploying a container](./install/getting-started-knative-app.md)
- [Routing and managing traffic with blue/green deployment](./serving/samples/blue-green-deployment.md)
- [Scaling automatically and sizing workloads based on demand](./serving/samples/autoscale-go/)
- [Scaling automatically and sizing workloads based on demand](./serving/configuring-the-autoscaler.md)
- [Binding running services to eventing ecosystems](./eventing/samples/kubernetes-event-source/)

Developers on Knative can use familiar idioms, languages, and frameworks to
8 changes: 6 additions & 2 deletions docs/eventing/broker-trigger.md
@@ -119,26 +119,30 @@ kubectl -n default get broker default
#### Manual Setup

In order to setup a `Broker` manually, we must first create the required
`ServiceAccount` and give it the proper RBAC permissions. This setup is required
`ServiceAccount`s and give them the proper RBAC permissions. This setup is required
once per namespace. These instructions will use the `default` namespace, but you
can replace it with any namespace you want to install a `Broker` into.

Create the `ServiceAccount`.

```shell
kubectl -n default create serviceaccount eventing-broker-ingress
kubectl -n default create serviceaccount eventing-broker-filter
```

Then give them the needed RBAC permissions:

```shell
kubectl -n default create rolebinding eventing-broker-ingress \
--clusterrole=eventing-broker-ingress \
--user=eventing-broker-ingress
kubectl -n default create rolebinding eventing-broker-filter \
--clusterrole=eventing-broker-filter \
--serviceaccount=default:eventing-broker-filter
```

Note that the previous commands use three different objects, all named
`eventing-broker-filter`. The `ClusterRole` is installed with Knative Eventing
`eventing-broker-ingress` or `eventing-broker-filter`. The `ClusterRole` is installed with Knative Eventing
[here](../../config/200-broker-clusterrole.yaml). The `ServiceAccount` was
created two commands prior. The `RoleBinding` is created with this command.

107 changes: 55 additions & 52 deletions docs/install/getting-started-knative-app.md
@@ -93,83 +93,86 @@ assigned an external IP address.

1. To find the IP address for your service, enter:

```shell
# In Knative 0.2.x and prior versions, the `knative-ingressgateway` service was used instead of `istio-ingressgateway`.
INGRESSGATEWAY=knative-ingressgateway
```shell
# In Knative 0.2.x and prior versions, the `knative-ingressgateway` service was used instead of `istio-ingressgateway`.
INGRESSGATEWAY=knative-ingressgateway

# The use of `knative-ingressgateway` is deprecated in Knative v0.3.x.
# Use `istio-ingressgateway` instead, since `knative-ingressgateway`
# will be removed in Knative v0.4.
if kubectl get configmap config-istio -n knative-serving &> /dev/null; then
INGRESSGATEWAY=istio-ingressgateway
fi
# The use of `knative-ingressgateway` is deprecated in Knative v0.3.x.
# Use `istio-ingressgateway` instead, since `knative-ingressgateway`
# will be removed in Knative v0.4.
if kubectl get configmap config-istio -n knative-serving &> /dev/null; then
INGRESSGATEWAY=istio-ingressgateway
fi

kubectl get svc $INGRESSGATEWAY --namespace istio-system
kubectl get svc $INGRESSGATEWAY --namespace istio-system

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```

Take note of the `EXTERNAL-IP` address.
Take note of the `EXTERNAL-IP` address.

You can also export the IP address as a variable with the following command:
You can also export the IP address as a variable with the following command:

```shell
export IP_ADDRESS=$(kubectl get svc $INGRESSGATEWAY --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
```
```shell
export IP_ADDRESS=$(kubectl get svc $INGRESSGATEWAY --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
```

> Note: if you use minikube or a baremetal cluster that has no external load
> balancer, the `EXTERNAL-IP` field is shown as `<pending>`. You need to use
> `NodeIP` and `NodePort` to interact your app instead. To get your app's
> `NodeIP` and `NodePort`, enter the following command:
> Note: If you use minikube or a baremetal cluster that has no external load
> balancer, the `EXTERNAL-IP` field is shown as `<pending>`. You need to use
> `NodeIP` and `NodePort` to interact with your app instead. To get your app's
> `NodeIP` and `NodePort`, enter the following command:

```shell
export IP_ADDRESS=$(kubectl get node --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc $INGRESSGATEWAY --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
```shell
export IP_ADDRESS=$(kubectl get node --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc $INGRESSGATEWAY --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```

1. To find the host URL for your service, enter:

```shell
kubectl get route helloworld-go --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-go helloworld-go.default.example.com
```
```shell
kubectl get route helloworld-go --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-go helloworld-go.default.example.com
```

You can also export the host URL as a variable using the following command:
> Note: By default, Knative uses the `example.com` domain.
> To configure a custom DNS domain, see [Using a Custom Domain](../serving/using-a-custom-domain.md).

```shell
export HOST_URL=$(kubectl get route helloworld-go --output jsonpath='{.status.domain}')
```
You can also export the host URL as a variable using the following command:

If you changed the name from `helloworld-go` to something else when creating
the `.yaml` file, replace `helloworld-go` in the above commands with the name
you entered.
```shell
export HOST_URL=$(kubectl get route helloworld-go --output jsonpath='{.status.domain}')
```

If you changed the name from `helloworld-go` to something else when creating
the `.yaml` file, replace `helloworld-go` in the above commands with the name
you entered.

1. Now you can make a request to your app and see the results. Replace
`IP_ADDRESS` with the `EXTERNAL-IP` you wrote down, and replace
`helloworld-go.default.example.com` with the domain returned in the previous
step.

```shell
curl -H "Host: helloworld-go.default.example.com" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```
```shell
curl -H "Host: helloworld-go.default.example.com" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```

If you exported the host URL and IP address as variables in the previous
steps, you can use those variables to simplify your cURL request:
If you exported the host URL and IP address as variables in the previous
steps, you can use those variables to simplify your cURL request:

```shell
curl -H "Host: ${HOST_URL}" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```
```shell
curl -H "Host: ${HOST_URL}" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```

If you deployed your own app, you might want to customize this cURL request
to interact with your application.
If you deployed your own app, you might want to customize this cURL request
to interact with your application.

It can take a few seconds for Knative to scale up your application and return
a response.
It can take a few seconds for Knative to scale up your application and return
a response.

> Note: Add `-v` option to get more detail if the `curl` command failed.
> Note: Add the `-v` option to get more detail if the `curl` command fails.

You've successfully deployed your first application using Knative!

16 changes: 8 additions & 8 deletions docs/install/installing-istio.md
@@ -32,7 +32,7 @@ installation you want, read through the options and choose the installation that
suits your needs.

You can easily customize your Istio installation with `helm`. The below sections
cover a few useful Istio configurations and their benefits.
cover a few useful Istio configurations and their benefits.

### Choosing an Istio installation

@@ -172,7 +172,7 @@ Install Istio with [Secret Discovery Service (SDS)][3] to enable a few additional
configurations for the gateway TLS. This will allow you to:

- Dynamically update the gateway TLS with multiple TLS certificates to terminate
TLS connections.
TLS connections.

- Use [Auto TLS](../serving/using-auto-tls.md).

@@ -182,7 +182,7 @@ The below `helm` flag is needed in your `helm` command to enable `SDS`:
--set gateways.istio-ingressgateway.sds.enabled=true
```

Enter the following command to install Istio with ingress `SDS` and
Enter the following command to install Istio with ingress `SDS` and
automatic sidecar injection:

```shell
@@ -216,11 +216,11 @@ helm template --namespace=istio-system \

### Updating your install to use cluster local gateway

If you want your Routes to be visible only inside the cluster, you may
want to enable [cluster local routes](../docs/serving/cluster-local-route.md).
To use this feature, add an extra Istio cluster local gateway to your cluster.
Enter the following command to add the cluster local gateway to an existing
Istio installation:
If you want your Routes to be visible only inside the cluster, you may want to
enable [cluster local routes](../../docs/serving/cluster-local-route.md). To use
this feature, add an extra Istio cluster local gateway to your cluster. Enter
the following command to add the cluster local gateway to an existing Istio
installation:

```shell
# Add the extra gateway.
1 change: 1 addition & 0 deletions docs/serving/README.md
@@ -34,6 +34,7 @@ serverless workload behaves on the cluster:
The `revision.serving.knative.dev` resource is a point-in-time snapshot of the
code and configuration for each modification made to the workload. Revisions
are immutable objects and can be retained for as long as useful.
Knative Serving Revisions can be automatically scaled up and down according to incoming traffic. See [Configuring the Autoscaler](./configuring-the-autoscaler.md) for more information.

![Diagram that displays how the Serving resources coordinate with each other.](https://github.com/knative/serving/raw/master/docs/spec/images/object_model.png)

117 changes: 117 additions & 0 deletions docs/serving/configuring-the-autoscaler.md
@@ -0,0 +1,117 @@
---
title: "Configuring the Autoscaler"
weight: 10
type: "docs"
---

Since Knative v0.2, per-revision autoscalers have been replaced by a single shared autoscaler. By default this is the Knative Pod Autoscaler (KPA), which provides fast, request-based autoscaling out of the box.

## Configuring Knative Pod Autoscaler

To modify the autoscaler configuration, edit the `config-autoscaler` ConfigMap in the `knative-serving` namespace.

You can view the default contents of this ConfigMap using the following command.

```shell
kubectl -n knative-serving get cm config-autoscaler
```

### Example of default ConfigMap

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-autoscaler
namespace: knative-serving
data:
container-concurrency-target-default: 100
container-concurrency-target-percentage: 1.0
enable-scale-to-zero: true
enable-vertical-pod-autoscaling: false
max-scale-up-rate: 10
panic-window: 6s
scale-to-zero-grace-period: 30s
stable-window: 60s
tick-interval: 2s
```

## Configuring scale to zero

To correctly configure autoscaling to zero for revisions, you must modify the following parameters in the ConfigMap.

### scale-to-zero-grace-period

`scale-to-zero-grace-period` specifies the time an inactive revision is kept running before it is scaled to zero (minimum: 30s).

```yaml
scale-to-zero-grace-period: 30s
```

### stable-window

When operating in stable mode, the autoscaler bases its decisions on the average concurrency over the stable window.

```yaml
stable-window: 60s
```

`stable-window` can also be configured in the Revision template as an annotation.

```yaml
autoscaling.knative.dev/window: 60s
```

### enable-scale-to-zero

Ensure that `enable-scale-to-zero` is set to `true`.

### Termination period

The termination period is the time the pod takes to shut down after the last request is finished. It is equal to the sum of the `stable-window` and `scale-to-zero-grace-period` values. With the example values above (60s + 30s), the termination period would be 90s.

## Configuring concurrency

Concurrency for autoscaling can be configured using the following methods.

### target

`target` defines how many concurrent requests the autoscaler aims for at a given time (a soft limit), and is the recommended configuration for autoscaling in Knative.

The default concurrency target is specified in the ConfigMap as `100`:

```yaml
container-concurrency-target-default: 100
```
This value can be configured by adding or modifying the `autoscaling.knative.dev/target` annotation value in the Revision template.

```yaml
autoscaling.knative.dev/target: 50
```
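For context, this annotation is set on the Revision template inside a Service or Configuration. A minimal sketch follows; the service name and image are placeholders, and the exact spec shape may vary between Knative versions:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go  # placeholder name
spec:
  template:
    metadata:
      annotations:
        # Aim for 50 in-flight requests per pod (soft limit).
        autoscaling.knative.dev/target: "50"
    spec:
      containers:
        - image: docker.io/{username}/helloworld-go  # placeholder image
```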

### containerConcurrency

**NOTE:** Use `containerConcurrency` only if your application has a clear need for an enforced concurrency constraint, that is, to cap how many requests can reach the app at a given time.

`containerConcurrency` limits the number of concurrent requests allowed into the application at a given time (a hard limit), and is configured in the Revision template.

```
containerConcurrency: 0 | 1 | 2-N
```
- A `containerConcurrency` value of `1` will guarantee that only one request is handled at a time by a given instance of the Revision container.
- A value of `2` or more will limit request concurrency to that value.
- A value of `0` means the system should decide.

If there is no `/target` annotation, the autoscaler is configured as if `/target` == `containerConcurrency`.
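As a sketch of where `containerConcurrency` sits in a manifest (the name and image are placeholders, and the spec shape may vary between Knative versions):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: singleflight-app  # placeholder name
spec:
  template:
    spec:
      # Hard limit: at most one request per container instance.
      containerConcurrency: 1
      containers:
        - image: docker.io/{username}/singleflight-app  # placeholder image
```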

## Configuring CPU-based autoscaling

**NOTE:** You can configure Knative autoscaling to work with either the default KPA or a CPU-based metric via the Horizontal Pod Autoscaler (HPA); however, scale-to-zero is only supported with the KPA.

You can configure Knative to use CPU-based autoscaling instead of the default request-based metric by adding or modifying the `autoscaling.knative.dev/class` and `autoscaling.knative.dev/metric` values as annotations in the Revision template.

```yaml
autoscaling.knative.dev/metric: cpu
autoscaling.knative.dev/class: hpa.autoscaling.knative.dev
```
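Putting the pieces together, a Revision template annotated for CPU-based scaling might look like the following sketch. The name, image, and target value are placeholders; treat the interpretation of `target` as a CPU percentage for the HPA class as an assumption to verify against your Knative version:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: cpu-scaled-app  # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: hpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: cpu
        # For the HPA class, target is assumed to be a CPU percentage.
        autoscaling.knative.dev/target: "70"
    spec:
      containers:
        - image: docker.io/{username}/cpu-scaled-app  # placeholder image
          resources:
            requests:
              cpu: 100m  # CPU-based scaling needs a CPU request to compute utilization
```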

## Additional resources

- [Go autoscaling sample](https://knative.dev/docs/serving/samples/autoscale-go/index.html)
- [Knative v0.3 Autoscaling - A Love Story blog post](https://medium.com/knative/knative-v0-3-autoscaling-a-love-story-d6954279a67a)
4 changes: 2 additions & 2 deletions docs/serving/samples/grpc-ping-go/README.md
@@ -2,7 +2,7 @@ A simple gRPC server written in Go that you can use for testing.

## Prerequisites

- [Install the Knative Serving version 0.4 or later](../../../install/README.md).
- [Install the latest version of Knative Serving](../../../install/README.md).

- Install [docker](https://www.docker.com/).

@@ -22,7 +22,7 @@ First, build and publish the gRPC server to DockerHub (replacing `{username}`):
docker build \
--tag "docker.io/{username}/grpc-ping-go" \
--file=docs/serving/samples/grpc-ping-go/Dockerfile .
docker push "${REPO}/docs/serving/samples/grpc-ping-go"
docker push "docker.io/{username}/grpc-ping-go"
```

Next, replace `{username}` in `sample.yaml` with your DockerHub username, and
@@ -0,0 +1,4 @@
Dockerfile
README.md
**/obj/
**/bin/
9 changes: 9 additions & 0 deletions docs/serving/samples/hello-world/helloworld-csharp/README.md
@@ -87,6 +87,15 @@ following commands:
CMD ["dotnet", "out/helloworld-csharp.dll"]
```

1. Create a `.dockerignore` file to ensure that any files related to a local build do not affect the container that you build for deployment.

```ignore
Dockerfile
README.md
**/obj/
**/bin/
```

1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub
username.
2 changes: 1 addition & 1 deletion docs/serving/samples/hello-world/helloworld-go/README.md
@@ -8,7 +8,7 @@ following commands:

```shell
git clone -b "release-0.6" https://github.com/knative/docs knative-docs
cd knative-docs/serving/samples/hello-world/helloworld-go
cd knative-docs/docs/serving/samples/hello-world/helloworld-go
```

## Before you begin