This repository was archived by the owner on Jun 28, 2024. It is now read-only.

test: Make aware test pod cgroup option. #1824

Merged: GabyCT merged 1 commit into kata-containers:master from jcvenegas:sandbox_cgroup_only_true on Oct 3, 2019

Conversation

@jcvenegas (Member)

  - Add scripts to enable it if needed.
  - Add a script to roll back the config that enables it.
  - In the Docker test, check whether the option is enabled and, if so, skip the host cgroup checks.

Depends-on: github.com/kata-containers/runtime#1880

Fixes: #1810

Signed-off-by: Jose Carlos Venegas Munoz <jcvenega@jcvenega-nuc.zpn.intel.com>
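The enable/rollback scripts described above reduce to toggling a single option in the Kata config. A minimal sketch of the idea, run against a scratch copy so it is safe anywhere; the real script edits `${KATA_ETC_CONFIG_PATH}` (quoted later in this review), and the option name `sandbox_cgroup_only` is an assumption based on the branch name:

```shell
#!/bin/bash
# Sketch of the enable/rollback idea: flip sandbox_cgroup_only in a TOML
# config with sed. Uses a scratch copy; the real script targets
# ${KATA_ETC_CONFIG_PATH} and backs it up before editing.
set -e

config="$(mktemp)"
cat > "${config}" <<'EOF'
[runtime]
sandbox_cgroup_only=false
EOF

option_false="sandbox_cgroup_only=false"
option_true="sandbox_cgroup_only=true"

cp "${config}" "${config}.bak"   # rollback restores this backup
sed -i "s,^${option_false},${option_true},g" "${config}"

grep "^sandbox_cgroup_only" "${config}"   # prints sandbox_cgroup_only=true
```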

Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated
Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated
Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated
Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated
Comment thread integration/docker/cgroups_test.go Outdated
@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch from 92a5698 to b9d958f on July 18, 2019 14:27
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Jul 24, 2019
* config: add option to use only the pod cgroup.

Use the option to use only the pod cgroup.

- Get the pod cgroup from the parent of the SANDBOX container.
- Do not create a host cgroup for each container.
- Place all Kata processes in the cgroup:
  * Proxy
  * Shim
  * Runtime (?)
  * Hypervisor

This may reduce the performance of small pods, as the hypervisor is totally constrained (no longer only partially, to vCPUs). It gives the container administrator more accurate data on resource usage.

Memory limitations: because the VMM is constrained in the pod sandbox it could be OOM-killed, so the "pod admin" should consider adding extra memory for the overhead.

From the Kata perspective we should investigate Kata's overhead, at least for memory, as this is a critical resource that could get all the pod workloads killed if the VM gets OOM-killed. Ideally the guest memory is large enough that the container workloads running in it get killed first, leaving room for Kata to keep working correctly on the host.

It may be helpful to provide the pod overhead as part of the CLI or another API:

$ kata-runtime overhead
Memory: 200 MB (Proxy, VM and Shim)

This will probably be hardcoded (as config or code), so the runtime can check the pod cgroup and fail nicely, asking for more memory in the parent cgroup for correct functionality. Additionally, we could have a metrics test to verify that it is at least possible to run a true or bash command with that defined memory.

- If the test fails due to a change that increases the memory, it should be updated; ideally it should also be updated when the memory is reduced.

info: docker cgroup before

Directory /sys/fs/cgroup/cpu/docker/:
└─kata_f12efaa8d43d55904f5fad8344a640536c74276bacff8bb8b0020fd056a4d5cc
  ├─29198 /opt/kata/bin/qemu-system-x86_64 -name sandbox-f12efaa8d43d55904f5f...
  └─29221 /opt/kata/libexec/kata-containers/kata-shim -agent unix:///run/vc/s...

- This mode uses all cgroup types in the pod sandbox.

This config enables the option to use the cgroup parent of the container annotated as Sandbox.

To make debugging and CI easier, just name it kata-sandbox:

pod-cgroup
 kata-sandbox
  2487 /opt/kata/bin/qemu-system-x86_64 -name sandbox-090142d008e431ac5d4a...
  22494 /opt/kata/libexec/kata-containers/kata-proxy -listen-socket unix://...
  2654 /opt/kata/libexec/kata-containers/kata-shim -agent unix:///run/vc/s...

Depends-on: github.com/kata-containers/tests#1824

Fixes: kata-containers#1879
Signed-off-by: Carlos Venegas <vmjcarlos@gmail.com>
@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch from b9d958f to 94b106a on July 25, 2019 14:14
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Jul 26, 2019
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Jul 26, 2019
* config: add option to use only the pod cgroup.

Use the option to use only the pod cgroup.

- Get the pod cgroup from the parent of the SANDBOX container.
- Do not create a host cgroup for each container.
- Place all Kata processes in the cgroup:
  * Proxy
  * Shim
  * Runtime
  * Hypervisor

Moving the hypervisor and other processes after they are launched does not give the expected behavior for resource accounting.

If a process is moved after it is created, the cgroup only starts accounting the resources used after the move. For better host accounting, move the runtime into the sandbox cgroup before anything is created; this way all the processes created by the runtime are accounted as expected.

Limitation: for very small memory cgroups the runtime may not work as expected, as the processes are going to be OOM-killed.

TODO:
A minimal pod cgroup memory should be defined and validated.

This may reduce the performance of small pods, as the hypervisor is totally constrained (no longer only partially, to vCPUs). It gives the container administrator more accurate data on resource usage, by checking the pod cgroup directly:

	cat /sys/fs/cgroup/<type>/pod-cgroup/data

Per-container usage is still available via:

	kata-runtime events

Memory limitations: because the VMM is constrained in the pod sandbox it could be OOM-killed, so the "pod admin" should consider adding extra memory for the overhead.

TODO:
- [ ] Provide a minimal expected Kata memory usage.

From the Kata perspective we should investigate Kata's overhead, at least for memory, as this is a critical resource that could get all the pod workloads killed if the VM gets OOM-killed. Ideally the guest memory is large enough that the container workloads running in it get killed first, leaving room for Kata to keep working correctly on the host.

It may be helpful to provide the pod overhead as part of the CLI or another API:
```
$ kata-runtime overhead
Memory: 200 MB (Proxy, VM and Shim)
```
This will probably be hardcoded (as config or code), so the runtime can check the pod cgroup and fail nicely, asking for more memory in the parent cgroup for correct functionality. Additionally, we could have a metrics test to verify that it is at least possible to run a true or bash command with that defined memory.

OR

```
Dynamic:
cgroup usage - containers inside guest usage
```

- If the test fails due to a change that increases the memory, it should be updated; ideally it should also be updated when the memory is reduced.

info: docker cgroup before

Directory /sys/fs/cgroup/cpu/docker/:
└─kata_f12efaa8d43d55904f5fad8344a640536c74276bacff8bb8b0020fd056a4d5cc
  ├─29198 /opt/kata/bin/qemu-system-x86_64 -name sandbox-f12efaa8d43d55904f5f...
  └─29221 /opt/kata/libexec/kata-containers/kata-shim -agent unix:///run/vc/s...

- This mode uses all cgroup types in the pod sandbox.

This config enables the option to use the cgroup parent of the container annotated as Sandbox.

To make debugging and CI easier, just name it kata-sandbox:

pod-cgroup
 kata-sandbox-<cid-sandboxcontainer>
  2487 /opt/kata/bin/qemu-system-x86_64 -name sandbox-090142d008e431ac5d4a...
  22494 /opt/kata/libexec/kata-containers/kata-proxy -listen-socket unix://...
  2654 /opt/kata/libexec/kata-containers/kata-shim -agent unix:///run/vc/s...

Depends-on: github.com/kata-containers/tests#1824

Fixes: kata-containers#1879
Signed-off-by: Carlos Venegas <vmjcarlos@gmail.com>
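The "Dynamic" alternative in the commit message above (overhead = pod cgroup usage minus in-guest container usage) is just a subtraction of two byte counters. A sketch with fabricated values standing in for the real readings (e.g. the cgroup's memory.usage_in_bytes and the guest-side accounting from `kata-runtime events`); both numbers here are made up for illustration:

```shell
# Dynamic overhead sketch: pod cgroup usage minus the usage reported for
# the containers inside the guest. Fake values stand in for real readings
# such as /sys/fs/cgroup/memory/<pod-cgroup>/memory.usage_in_bytes.
pod_usage=$((512 * 1024 * 1024))    # bytes charged to the pod cgroup
guest_usage=$((300 * 1024 * 1024))  # bytes used by workloads in the guest

overhead_mb=$(( (pod_usage - guest_usage) / 1024 / 1024 ))
echo "kata overhead: ${overhead_mb} MB"   # prints: kata overhead: 212 MB
```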
@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch from 94b106a to 4e97883 on July 26, 2019 16:34
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Jul 26, 2019
@jodh-intel

Ping @jcvenegas.

jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Jul 30, 2019
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Jul 31, 2019
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Aug 6, 2019
* config: add option to use only pod cgroup (SandboxCgroupOnly)

1) Given a PodSandbox container creation, let

```
cgroupPod=Parent(cgroupPath)
```

and
```
KataSandboxCgroup=${cgroupPod}/kata-sandbox-<PodSandbox-id>
```

2) Then create the KataSandboxCgroup cgroup.

3) Join the KataSandboxCgroup.

Any process created by the runtime will be created in KataSandboxCgroup:

  * Proxy
  * Shim
  * Runtime
  * Hypervisor

Limitations:

* For very small memory cgroups the runtime may not work as expected, as the processes are going to be OOM-killed.

* This may reduce the performance of small pods, where the pod cgroup is sized to match the containers' resources (no overhead is considered).

Depends-on: github.com/kata-containers/tests#1824

Fixes: kata-containers#1879
Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
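Steps 1-3 above can be sketched in shell. The real runtime does this in Go against /sys/fs/cgroup; here a throwaway directory stands in for the cgroup filesystem so the sketch runs without privileges, and the sandbox id is fabricated:

```shell
# Sketch of the SandboxCgroupOnly flow with a fake cgroup root (a tmpdir,
# since writing under /sys/fs/cgroup needs root). The layout and the
# kata-sandbox-<id> name follow the commit message; the id is made up.
cgroup_root="$(mktemp -d)"                 # stand-in for /sys/fs/cgroup/<type>
cgroup_pod="${cgroup_root}/pod-cgroup"     # 1) cgroupPod = Parent(cgroupPath)
sandbox_id="090142d008e4"

# 2) Create the kata sandbox cgroup under the pod cgroup.
kata_cgroup="${cgroup_pod}/kata-sandbox-${sandbox_id}"
mkdir -p "${kata_cgroup}"

# 3) Join: on a real cgroupfs, writing our PID to cgroup.procs moves this
# process (and every child it spawns: proxy, shim, hypervisor) into it.
echo $$ > "${kata_cgroup}/cgroup.procs"

cat "${kata_cgroup}/cgroup.procs"   # prints this shell's PID
```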
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Aug 7, 2019
@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch 3 times, most recently from 89ed58f to 458cd9e on August 7, 2019 17:50
@jcvenegas (Member Author)

/test

@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch 2 times, most recently from 074d649 to c6d3f09 on August 7, 2019 23:59
@jcvenegas (Member Author)

/test

Comment thread integration/kubernetes/init.sh Outdated
sudo systemctl start ${cri_runtime}
for i in {1..5}; do
	sleep 5
	if [ -f "${cri_runtime_socket}" ]; then
Member
I see some other whitespace cleanup, as well as this retry. I'm okay with the whitespace changes being included in the commit, but what's the retry for?

@jcvenegas (Member Author), Aug 8, 2019

Because the cri-containerd job runs the k8s tests twice, the containerd service is restarted, and it seems kubeadm tries to connect before it is ready. I added some retries to check that the socket exists.
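The retry under discussion amounts to polling for the socket file in a bounded loop. A self-contained sketch (a background `touch` stands in for containerd creating its socket, and the timings are arbitrary; in init.sh the file is `${cri_runtime_socket}` and the service is real):

```shell
# Poll for the CRI runtime socket instead of assuming it exists right
# after a service restart. A background `touch` plays containerd's role.
socket="$(mktemp -u)"
( sleep 0.2; touch "${socket}" ) &

found="no"
for i in {1..5}; do
	sleep 0.5
	if [ -f "${socket}" ]; then
		found="yes"
		break
	fi
done
echo "socket found: ${found}"   # prints: socket found: yes
```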

@egernst (Member) left a comment

@chavafg @GabyCT can y’all PTAL?

@jcvenegas jcvenegas added the do-not-merge PR has problems or depends on another label Aug 8, 2019
Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated

echo "Replacing ${option_false} for ${option_true}"
# If is false
sudo sed -i "s,^${option_false},${option_true},g" "${KATA_ETC_CONFIG_PATH}"
Contributor
I would recommend using crudini here; that way we don't need to check whether the option is commented out or set to true/false.

Member Author
awesome, just added

Comment thread integration/docker/cgroups_test.go
Comment thread integration/kubernetes/init.sh Outdated
Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated
Comment thread integration/kubernetes/init.sh Outdated
Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated

if [ -f "${KATA_ETC_CONFIG_PATH}" ]; then
	bk_file="${KATA_ETC_CONFIG_PATH}-${bk_suffix}"
	echo "backup ${KATA_ETC_CONFIG_PATH} in ${bk_file}"
s/echo/info/g ?

Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated
Comment thread integration/docker/cgroups_test.go Outdated
Comment thread integration/docker/cgroups_test.go Outdated
Comment thread integration/docker/cgroups_test.go
Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Aug 19, 2019
add option to enable only pod cgroup (SandboxCgroupOnly)

Depends-on: github.com/kata-containers/tests#1824

Fixes: kata-containers#1879
Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Aug 19, 2019
When a new sandbox is created, join its cgroup path; this creates the proxy, shim, etc. in the sandbox cgroup.

Depends-on: github.com/kata-containers/tests#1824

Fixes: kata-containers#1879

Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Aug 19, 2019
jcvenegas added a commit to jcvenegas/runtime that referenced this pull request Aug 29, 2019
@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch 5 times, most recently from ac225d8 to 7b4d416 on August 30, 2019 06:23
@jcvenegas (Member Author)

/test

@jcvenegas jcvenegas removed the do-not-merge PR has problems or depends on another label Sep 9, 2019
@jcvenegas (Member Author)

kata-containers/runtime#1880 is merged now, so this can be merged too.
@chavafg @egernst @devimc

@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch from 7b4d416 to 9c965ec on September 9, 2019 18:49
@jcvenegas (Member Author)

/test

Comment thread .ci/rollback_enable_sandbox_cgroup_only.sh Outdated
@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch from 9c965ec to 5ed0047 on September 9, 2019 19:17
@jcvenegas (Member Author)

/test

@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch from 5ed0047 to 9d03488 on September 9, 2019 22:14
@jcvenegas (Member Author)

/test

@jcvenegas (Member Author)

@jodh-intel @egernst @devimc ready to merge PTAL

Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated
Comment thread .ci/enable_sandbox_cgroup_only.sh Outdated

info "Validate option is enabled"
current_value=$(kata-runtime kata-env --json | jq ".Runtime.SandboxCgroupOnly")
if [ "$current_value" != "${option_value}" ]; then

Could be simplified to:

[ "$current_value" != "${option_value}" ] && die "The option was not updated"

Comment thread .ci/rollback_enable_sandbox_cgroup_only.sh Outdated
@@ -0,0 +1,35 @@
#!/bin/bash

This script is very similar to the enable one. To avoid the code duplication, I'd be tempted to merge them together into .ci/toggle_sandbox_cgroup_only.sh (or similar) and require the caller to specify an enable or disable argument. That will guarantee the two scripts won't diverge in future.

Member Author

Done, looks better now, thanks.

@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch from 9d03488 to 547940b on September 12, 2019 20:22
@jcvenegas (Member Author)

/test

@jodh-intel left a comment

Thanks @jcvenegas.

lgtm.

@jcvenegas jcvenegas added the do-not-merge PR has problems or depends on another label Sep 13, 2019
- Add scripts to enable it if needed.
- Add a script to roll back the config that enables it.
- Add a CI job case for pod cgroup.
- Run the Kubernetes tests again with pod cgroup enabled for K8S_CONTAINERD_JOB.
- In the Docker test, check whether the option is enabled and, if so, skip the host cgroup checks.

Fixes: kata-containers#1810

Signed-off-by: Jose Carlos Venegas Munoz <jcvenega@jcvenega-nuc.zpn.intel.com>
@jcvenegas jcvenegas force-pushed the sandbox_cgroup_only_true branch from 547940b to 29618b3 on September 13, 2019 20:23
@jcvenegas (Member Author)

/test

@jcvenegas jcvenegas removed the do-not-merge PR has problems or depends on another label Sep 13, 2019
@jcvenegas (Member Author)

/test

@GabyCT GabyCT merged commit 6b4bb31 into kata-containers:master Oct 3, 2019

Development

Successfully merging this pull request may close these issues.

test: Add use case for sandbox_cgroup_only

6 participants