
Support hotplug PCIe in q35 #2410

Merged: devimc merged 2 commits into kata-containers:master from Jimmy-Xu:fix-q35-hotplug-pcie on Feb 6, 2020

Conversation

Jimmy-Xu (Contributor) commented Jan 23, 2020

Issue
Some Nvidia GPUs (such as the Tesla P100) are PCIe devices with a large BAR space (>4GB).
I can pass one through to a Kata container by adding --device=/dev/vfio/xxx.
When I use machine_type pc and run lspci in the Kata container:

# lspci -vv -s $(lspci -nn | grep -i nvidia | head -n 1 | awk '{print $1}') | grep Region
  Region 0: Memory at c0000000 (32-bit, non-prefetchable) [disabled] [size=16M]
  Region 1: Memory at <unassigned> (64-bit, prefetchable) [disabled] <<<< more than 4GB
  Region 3: Memory at 4200000000 (64-bit, prefetchable) [disabled] [size=32M]

So I switched to machine_type q35, but when I run the Kata container, the following error occurs:
Bus 'pcie.0' does not support hotplugging: unknown"

Solution
This PR appends pcie-root-port devices to the QEMU command line, so that PCIe devices can be hot-plugged into a pcie-root-port (a sketch follows below).

Fixes: #2432
Depends on PR: kata-containers/govmm#116
Ref: https://github.com/qemu/qemu/blob/master/docs/pcie.txt

Note: the pciehp driver (CONFIG_HOTPLUG_PCI_PCIE) is required in the guest kernel to hotplug PCIe devices.
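For illustration only, here is a rough sketch of how such root ports might be built through govmm once kata-containers/govmm#116 is vendored. The PCIeRootPortDevice field names follow that PR as I understand it, and all concrete values (the IDs, the two-port count, the 16G reservation) are hypothetical:

package main

import (
	"fmt"

	"github.com/intel/govmm/qemu"
)

func main() {
	// Cold-plug two PCIe root ports on the q35 root bus (pcie.0) so that
	// PCIe devices can later be hot-plugged into them.
	var devices []qemu.Device
	for i := 0; i < 2; i++ {
		devices = append(devices, qemu.PCIeRootPortDevice{
			ID:            fmt.Sprintf("rp%d", i), // bus name used later by device_add
			Bus:           "pcie.0",
			Chassis:       fmt.Sprintf("%d", i), // (chassis, slot) must be unique per port
			Slot:          "0",
			Pref64Reserve: "16G", // reserve a 64-bit prefetchable window for a large BAR
		})
	}
	// Roughly the command-line fragment this produces:
	//   -device pcie-root-port,id=rp0,bus=pcie.0,chassis=0,slot=0,pref64-reserve=16G ...
	cfg := qemu.Config{
		Machine: qemu.Machine{Type: "q35"},
		Devices: devices,
	}
	_ = cfg
}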

jodh-intel commented:

Thanks for raising @Jimmy-Xu, but the build is failing:

 github.com/kata-containers/runtime/virtcontainers
./qemu.go:1159:51: too many arguments in call to q.qmpMonitorCh.qmp.ExecuteVFIODeviceAdd
	have ("context".Context, string, string, string, string)
	want ("context".Context, string, string, string)
./qemu_arch_base.go:660:3: undefined: qemu.PCIeRootPortDevice

Also, please note that we require commits in a particular format which includes a "Fixes: #XXX" reference to a GitHub issue - see:

@grahamwhaley grahamwhaley requested a review from devimc January 23, 2020 09:50
devimc commented Jan 23, 2020

@Jimmy-Xu thanks for raising this, but something doesn't make sense to me:

Bus 'pcie.0' does not support hotplugging: unknown"

did you enable hotplug_vfio_on_root_bus because pcie-pci-bridges don't support large BAR space PCIe devices?

amshinde (Member) commented:

@Jimmy-Xu Have you taken a look at https://github.com/kata-containers/documentation/blob/master/use-cases/GPU-passthrough-and-Kata.md ?

Jimmy-Xu (Contributor, Author) commented:

@Jimmy-Xu thanks for raising this, but something doesn't make sense to me:

Bus 'pcie.0' does not support hotplugging: unknown"

did you enable hotplug_vfio_on_root_bus because pcie-pci-bridges don't support large BAR space PCIe devices?

@devimc
I did two tests:

Test 1: machine_type=pc and hotplug_vfio_on_root_bus=true
I ran the following command in the kata container:

# lspci -s 00:06.0 -vv | grep Region
  Region 0: Memory at 80000000 (32-bit, non-prefetchable) [disabled] [size=16M]
  Region 1: Memory at <unassigned> (64-bit, prefetchable) [disabled]
  Region 3: Memory at 4240000000 (64-bit, prefetchable) [disabled] [size=32M]

Test 2: machine_type=q35 and hotplug_vfio_on_root_bus=true
The kata container failed to run; the error message is

Bus 'pcie.0' does not support hotplugging: unknown"

Then I checked the following document:
https://github.com/qemu/qemu/blob/master/docs/pcie.txt

5. Hot-plug

PCI devices can be hot-plugged into PCI Express to PCI and PCI-PCI Bridges.

PCI Express devices can be natively hot-plugged/hot-unplugged into/from
PCI Express Root Ports (and PCI Express Downstream Ports).

So I want to add a pcie-root-port device to support hot-plugging PCIe devices.
That's why I created PR kata-containers/govmm#116. A QMP-level sketch of the resulting hotplug follows below.
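To make the mechanism concrete, here is a minimal sketch of native PCIe hotplug onto a root port through govmm's QMP client. The five-argument ExecuteVFIODeviceAdd signature is inferred from the build error quoted earlier, and the socket path, device ID, BDF and bus name are all hypothetical:

package main

import (
	"context"
	"log"

	"github.com/intel/govmm/qemu"
)

func main() {
	ctx := context.Background()
	disconnected := make(chan struct{})

	// Connect to the VM's QMP socket (path is hypothetical).
	q, _, err := qemu.QMPStart(ctx, "/run/vm/qmp.sock", qemu.QMPConfig{}, disconnected)
	if err != nil {
		log.Fatal(err)
	}
	defer q.Shutdown()

	// QMP requires the capabilities handshake before any other command.
	if err := q.ExecuteQMPCapabilities(ctx); err != nil {
		log.Fatal(err)
	}

	// Hot-plug the VFIO device onto root port "rp0" instead of bus "pcie.0".
	// Equivalent QMP command:
	//   {"execute": "device_add", "arguments": {"driver": "vfio-pci",
	//    "id": "vfio0", "host": "3b:00.0", "bus": "rp0"}}
	if err := q.ExecuteVFIODeviceAdd(ctx, "vfio0", "3b:00.0", "rp0", ""); err != nil {
		log.Fatal(err)
	}
}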

Jimmy-Xu (Contributor, Author) commented:

@Jimmy-Xu Have you taken a look at https://github.com/kata-containers/documentation/blob/master/use-cases/GPU-passthrough-and-Kata.md ?

@amshinde
Yes, I have.
With an Nvidia Tesla T4 (not a large BAR space device, 256MB), machine_type=pc and hotplug_vfio_on_root_bus=true work fine.
However, this does not work for the Nvidia Tesla P100 (large BAR space, 16GB).

Jimmy-Xu (Contributor, Author) commented:

Thanks for raising @Jimmy-Xu, but the build is failing:

 github.com/kata-containers/runtime/virtcontainers
./qemu.go:1159:51: too many arguments in call to q.qmpMonitorCh.qmp.ExecuteVFIODeviceAdd
	have ("context".Context, string, string, string, string)
	want ("context".Context, string, string, string)
./qemu_arch_base.go:660:3: undefined: qemu.PCIeRootPortDevice

Also, please note that we require commits in a particular format which includes a "Fixes: #XXX" reference to a GitHub issue - see:

@jodh-intel
This is because this PR depends on kata-containers/govmm#116.

I will create an issue for this.

devimc commented Jan 31, 2020

@Jimmy-Xu thanks for answering

devimc commented Jan 31, 2020

@Jimmy-Xu kata-containers/govmm#116 was merged

@Jimmy-Xu Jimmy-Xu changed the title from "Fix hotplug pcie in q35" to "Support hotplug PCIe in q35" on Feb 3, 2020
devimc left a comment:

thanks @Jimmy-Xu , two things:

  1. I guess you forgot to re-vendor govmm (please do it in a separate commit and include a shortlog, see [a])
  2. please fix your commit messages, see [b]

[a] - https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md#re-vendor-prs
[b] - https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md#examples

@Jimmy-Xu Jimmy-Xu force-pushed the fix-q35-hotplug-pcie branch 2 times, most recently from bbdb8d6 to c08acaf on February 4, 2020 16:08
Update github.com/intel/govmm.

shortlog:
    cab4709 qemu: Add pcie-root-port device support.

Fixes: kata-containers#2432

Signed-off-by: Jimmy Xu <junming.xjm@antfin.com>
amshinde (Member) commented Feb 5, 2020

@Jimmy-Xu I haven't looked at the PR too closely yet, but I see the need for supporting hotplug of PCIe root ports.
It would be great if you raised another PR to augment this document: https://github.com/kata-containers/documentation/blob/master/use-cases/GPU-passthrough-and-Kata.md with steps for hotplugging Nvidia Tesla P100 devices. This could be a separate doc dedicated to Nvidia GPUs.

@Jimmy-Xu Jimmy-Xu force-pushed the fix-q35-hotplug-pcie branch 2 times, most recently from 7b6284b to 95aa4f9 on February 5, 2020 10:41
Jimmy-Xu (Contributor, Author) commented Feb 5, 2020

thanks @Jimmy-Xu , two things:

  1. I guess you forgot to re-vendor govmm (please do it in a separate commit and include a shortlog, see [a])
  2. please fix your commit messages, see [b]

[a] - https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md#re-vendor-prs
[b] - https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md#examples

@devimc Done. Thanks for review!

Jimmy-Xu (Contributor, Author) commented Feb 5, 2020

@Jimmy-Xu I haven't looked at the PR too closely yet, but I see the need for supporting hotplug of PCIe root ports.
It would be great if you raised another PR to augment this document: https://github.com/kata-containers/documentation/blob/master/use-cases/GPU-passthrough-and-Kata.md with steps for hotplugging Nvidia Tesla P100 devices. This could be a separate doc dedicated to Nvidia GPUs.

@amshinde OK, I will update the doc.

@Jimmy-Xu Jimmy-Xu force-pushed the fix-q35-hotplug-pcie branch 2 times, most recently from 37b233b to 01e251f on February 5, 2020 14:27
devimc left a comment:
thanks @Jimmy-Xu, I left some comments

@Jimmy-Xu Jimmy-Xu force-pushed the fix-q35-hotplug-pcie branch 3 times, most recently from b100cd8 to 0087853 on February 5, 2020 18:16
Jimmy-Xu (Contributor, Author) commented Feb 5, 2020

@devimc updated again. travis-ci/pr passed.

devimc left a comment:

thanks @Jimmy-Xu, lgtm, I'd like to wait for @jodh-intel's review

devimc commented Feb 5, 2020

/test

@devimc devimc merged commit bd7d310 into kata-containers:master Feb 6, 2020
@Jimmy-Xu Jimmy-Xu deleted the fix-q35-hotplug-pcie branch February 17, 2020 09:01
amshinde (Member) commented:

@Jimmy-Xu Since you have recently gone through the exercise of adding support for Nvidia GPUs, can you add documentation for it, similar to what we have here:
https://github.com/kata-containers/documentation/blob/master/use-cases/GPU-passthrough-and-Kata.md

wansuyoo commented Mar 12, 2020

@Jimmy-Xu
I was wondering if I could get some help with how to hotplug PCIe.
In my case, an error occurs when hotplugging a VFIO PCIe device.

 Mar 12 18:25:52 wansu-ubuntu kata-runtime[11714]: time="2020-03-12T18:25:52.601608969+09:00" level=info msg="Start hot-plug VFIO device" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 device-info="{\"IsPCIe\":false,\"Type\":1,\"ID\":\"vfio-1a834a602eb5b28d0\",\"BDF\":\"00:01.0\",\"SysfsDev\":\"\",\"VendorID\":\"\",\"DeviceID\":\"\",\"Class\":\"0x060400\",\"Bus\":\"\"}" hotplug-vfio-on-root-bus=true machine-type=q35 name=kata-runtime pcie-root-port=2 pid=11714 source=virtcontainers subsystem=qemu
 Mar 12 18:25:52 wansu-ubuntu kata-runtime[11714]: time="2020-03-12T18:25:52.602983656+09:00" level=error msg="failed to hotplug VFIO device" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 error="QMP command failed: Bus 'pcie.0' does not support hotplugging" name=kata-runtime pid=11714 sandbox=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 source=virtcontainers subsystem=sandbox vfio-device-BDF="00:01.0" vfio-device-ID=vfio-1a834a602eb5b28d0
 Mar 12 18:25:52 wansu-ubuntu kata-runtime[11714]: time="2020-03-12T18:25:52.603023781+09:00" level=error msg="Failed to add device" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 error="QMP command failed: Bus 'pcie.0' does not support hotplugging" name=kata-runtime pid=11714 source=virtcontainers subsystem=device

I think it is caused by "IsPCIe":false returned from isPCIeDevice.
And that seems to be because SysBusPciSlotsPath is empty on the host machine (a sketch of this check follows the status output below).

$ ls -al /sys/bus/pci/slots/
total 0
drwxr-xr-x 2 root root 0 Mar 12 13:57 .
drwxr-xr-x 5 root root 0 Mar 12 13:57 ..

It looks like /sys/bus/pci/slots/ should enumerate the hotplug slots, but in my case none were created.
PCIe hotplugging doesn't seem to work on the host machine.
My question is whether this depends on HW or SW compatibility: the mainboard or the PCIe card, BIOS settings or kernel modules, etc.

Below is my Host machine status.

$ kata-runtime --version
kata-runtime  : 1.11.0-alpha0
$ grep "CONFIG_HOTPLUG_PCI_ACPI=" /boot/config-`uname -r`
CONFIG_HOTPLUG_PCI_ACPI=y
$ grep "CONFIG_HOTPLUG_PCI=" /boot/config-`uname -r`
CONFIG_HOTPLUG_PCI=y
$ grep "CONFIG_MEMORY_HOTPLUG=" /boot/config-`uname -r`
CONFIG_MEMORY_HOTPLUG=y
$ grep "CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=" /boot/config-`uname -r`
CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y
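For reference, a minimal sketch of the kind of check described above, assuming isPCIeDevice enumerates /sys/bus/pci/slots and matches the device's BDF against each slot's address file; this is an illustration of the mechanism, not the runtime's exact code:

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
	"strings"
)

// sysBusPciSlotsPath mirrors the config.SysBusPciSlotsPath value in the runtime logs below.
const sysBusPciSlotsPath = "/sys/bus/pci/slots"

// isPCIeHotplugSlot reports whether the device with the given full BDF
// (e.g. "0000:01:00.0") sits in a hotplug-capable PCI slot. On hosts where
// /sys/bus/pci/slots is empty, as shown above, it always returns false,
// which would explain the "IsPCIe":false in the log.
func isPCIeHotplugSlot(bdf string) bool {
	slots, err := ioutil.ReadDir(sysBusPciSlotsPath)
	if err != nil {
		return false
	}
	for _, slot := range slots {
		// Each slot directory exposes an "address" file containing
		// "domain:bus:device" for the devices in that slot.
		b, err := ioutil.ReadFile(filepath.Join(sysBusPciSlotsPath, slot.Name(), "address"))
		if err != nil {
			continue
		}
		if strings.HasPrefix(bdf, strings.TrimSpace(string(b))) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isPCIeHotplugSlot("0000:01:00.0"))
}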

amshinde (Member) commented:

Mar 12 18:25:52 wansu-ubuntu kata-runtime[11714]: time="2020-03-12T18:25:52.603023781+09:00" level=error msg="Failed to add device" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 error="QMP command failed: Bus 'pcie.0' does not support hotplugging" name=kata-runtime pid=11714 source=virtcontainers subsystem=device

@wansuyoo The above error you are seeing is because the hotplug is being attempted on the PCIe root bus. If you want PCIe hotplug, you need to set machine_type to q35 in the Kata configuration.toml file, as the default pc platform does not support PCIe. You also need to uncomment the pcie_root_port line in that file (#pcie_root_port = 2), so that the hotplug happens on a PCIe root port instead.

You need the above steps if you are interested in PCIe hotplug. If PCI hotplug suits your use case instead, you can stay with the default machine_type = pc and enable the hotplug_vfio_on_root_bus option. This option is required to support devices with a large BAR.

wansuyoo commented:

@amshinde
Yes, I enabled the configuration options you mentioned.
In my case, /sys/bus/pci/slots/ is empty on the host machine.
That is why I asked; I was curious about this.

machine_type = "q35"
hotplug_vfio_on_root_bus = true
pcie_root_port = 2
kata-collect-data.sh details

Meta details

Running kata-collect-data.sh version 1.11.0-alpha0 (commit d54723a5c4af5c1488af16b59a055b54ca136050-dirty) at 2020-03-13.15:02:16.163320324+0900.


Runtime is /usr/local/bin/kata-runtime.

kata-env

Output of "/usr/local/bin/kata-runtime kata-env":

[Meta]
  Version = "1.0.23"

[Runtime]
  Debug = false
  Trace = false
  DisableGuestSeccomp = true
  DisableNewNetNs = false
  SandboxCgroupOnly = false
  Path = "/usr/local/bin/kata-runtime"
  [Runtime.Version]
    OCI = "1.0.1-dev"
    [Runtime.Version.Version]
      Semver = "1.11.0-alpha0"
      Major = 1
      Minor = 11
      Patch = 0
      Commit = "d54723a5c4af5c1488af16b59a055b54ca136050-dirty"
  [Runtime.Config]
    Path = "/etc/kata-containers/configuration.toml"

[Hypervisor]
  MachineType = "q35"
  Version = "QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.23)\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
  Path = "/usr/bin/qemu-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  SharedFS = "virtio-9p"
  Msize9p = 8192
  MemorySlots = 10
  PCIeRootPort = 2
  HotplugVFIOOnRootBus = true
  Debug = false
  UseVSock = false

[Image]
  Path = "/usr/share/kata-containers/kata-containers-2020-03-11-19:41:34.991221261+0900-397ce26"

[Kernel]
  Path = "/usr/share/kata-containers/vmlinuz-5.4.15-68"
  Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket agent.log=debug initcall_debug"

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Path = "/usr/libexec/kata-containers/kata-proxy"
  Debug = false
  [Proxy.Version]
    Semver = "1.11.0-alpha0-4ec94c8"
    Major = 1
    Minor = 11
    Patch = 0
    Commit = "4ec94c8"

[Shim]
  Type = "kataShim"
  Path = "/usr/libexec/kata-containers/kata-shim"
  Debug = false
  [Shim.Version]
    Semver = "1.11.0-alpha0-834e0b3"
    Major = 1
    Minor = 11
    Patch = 0
    Commit = "834e0b3"

[Agent]
  Type = "kata"
  Debug = false
  Trace = false
  TraceMode = ""
  TraceType = ""

[Host]
  Kernel = "5.3.0-40-generic"
  Architecture = "amd64"
  VMContainerCapable = true
  SupportVSocks = true
  [Host.Distro]
    Name = "Ubuntu"
    Version = "18.04"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz"

[Netmon]
  Path = "/usr/libexec/kata-containers/kata-netmon"
  Debug = false
  Enable = false
  [Netmon.Version]
    Semver = "1.11.0-alpha0"
    Major = 1
    Minor = 11
    Patch = 0
    Commit = "<<unknown>>"

Runtime config files

Runtime default config files

/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml

Runtime config file contents

Output of "cat "/etc/kata-containers/configuration.toml"":

# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/usr/bin/qemu-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
#initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "q35"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = " agent.log=debug initcall_debug"

# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 4096
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10

# This size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0

# Specifies virtio-mem will be enabled or not.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Shared file system type:
#   - virtio-9p (default)
#   - virtio-fs
shared_fs = "virtio-9p"

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/bin/virtiofsd"

# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024

# Extra args for virtiofsd daemon
#
# Format example:
#   ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []

# Cache mode:
#
#  - none
#    Metadata, data, and pathname lookup are not cached in guest. They are
#    always fetched from host and any changes are immediately pushed to host.
#
#  - auto
#    Metadata and pathname lookup cache expires after a configured amount of
#    time (default is 1 second). Data is cached while the file is open (close
#    to open consistency).
#
#  - always
#    Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes 
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true

# VFIO devices are hotplugged on a bridge by default. 
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on 
# a bridge. This value is valid for "pc" machine type.
# Default false
hotplug_vfio_on_root_bus = true

# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using large PCI BAR devices, such as an Nvidia GPU.
# The value is the number of pcie_root_port devices to add.
# This value is valid only when hotplug_vfio_on_root_bus is true and machine_type is "q35".
# Default 0
pcie_root_port = 2

# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs ring0) for network I/O performance. 
#disable_vhost_net = true

#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy.  If the host
# runs out of entropy, the VM's boot time will increase, leading to startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"

# The number of caches of VMCache:
# unspecified or == 0   --> VMCache is disabled
# > 0                   --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket.  The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when it gets
# requests from clients.
# Factory grpccache is the VMCache client.  It will request gRPC format
# VM and convert it back to a VM.  If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true

[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true

# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
#   setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
#   will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
#   full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"

# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
#  - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
#  * A kernel module is specified and the modprobe command is not installed in the guest
#    or it fails loading the module.
#  * The module is not available in the guest or it doesn't meet the guest kernel
#    requirements, like architecture and version.
#
kernel_modules=[]


[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# container network interface
# Options:
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used with a customized network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"

# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true

# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false

# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]

Output of "cat "/usr/share/defaults/kata-containers/configuration.toml"":

# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/usr/bin/qemu-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10

# This size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0

# Specifies virtio-mem will be enabled or not.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Shared file system type:
#   - virtio-9p (default)
#   - virtio-fs
shared_fs = "virtio-9p"

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/bin/virtiofsd"

# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024

# Extra args for virtiofsd daemon
#
# Format example:
#   ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []

# Cache mode:
#
#  - none
#    Metadata, data, and pathname lookup are not cached in guest. They are
#    always fetched from host and any changes are immediately pushed to host.
#
#  - auto
#    Metadata and pathname lookup cache expires after a configured amount of
#    time (default is 1 second). Data is cached while the file is open (close
#    to open consistency).
#
#  - always
#    Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes 
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true

# VFIO devices are hotplugged on a bridge by default. 
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on 
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using large PCI BAR devices, such as an Nvidia GPU.
# The value is the number of pcie_root_port devices to add.
# This value is valid only when hotplug_vfio_on_root_bus is true and machine_type is "q35".
# Default 0
#pcie_root_port = 2

# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs ring0) for network I/O performance. 
#disable_vhost_net = true

#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy.  If the host
# runs out of entropy, the VM's boot time will increase, leading to startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"

# The number of caches of VMCache:
# unspecified or == 0   --> VMCache is disabled
# > 0                   --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket.  The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when it gets
# requests from clients.
# Factory grpccache is the VMCache client.  It will request gRPC format
# VM and convert it back to a VM.  If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true

[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true

# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
#   setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
#   will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
#   full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"

# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
#  - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
#  * A kernel module is specified and the modprobe command is not installed in the guest
#    or it fails loading the module.
#  * The module is not available in the guest or it doesn't meet the guest kernel
#    requirements, like architecture and version.
#
kernel_modules=[]


[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# container network interface
# Options:
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used with a customized network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"

# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true

# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false

# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]

KSM throttler

version

Output of "/usr/libexec/kata-ksm-throttler/kata-ksm-throttler --version":

kata-ksm-throttler version 1.11.0-alpha0-e0cd739

systemd service

Image details

---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2020-03-11T10:40:44.941233817+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "bionic"
  version: "18.04"
  packages:
    default:
      - "systemd,iptables,init,chrony,kmod,bash,coreutils,net-tools,network-manager"
    extra:
      - "bash"
      - "coreutils"
      - "net-tools"
      - "network-manager"
agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.10.0-rc0-36b37f678aa87701028e8182e1b837a439aef5fb"
  agent-is-init-daemon: "no"

Initrd details

No initrd


Logfiles

Runtime logs

Recent runtime problems found in system journal:

time="2020-03-12T18:25:52.602927167+09:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"Bus 'pcie.0' does not support hotplugging\"}}" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 name=kata-runtime pid=11714 source=virtcontainers subsystem=qmp
time="2020-03-12T18:25:52.602983656+09:00" level=error msg="failed to hotplug VFIO device" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 error="QMP command failed: Bus 'pcie.0' does not support hotplugging" name=kata-runtime pid=11714 sandbox=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 source=virtcontainers subsystem=sandbox vfio-device-BDF="00:01.0" vfio-device-ID=vfio-1a834a602eb5b28d0
time="2020-03-12T18:25:52.603023781+09:00" level=error msg="Failed to add device" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 error="QMP command failed: Bus 'pcie.0' does not support hotplugging" name=kata-runtime pid=11714 source=virtcontainers subsystem=device
time="2020-03-12T18:25:52.603061271+09:00" level=warning msg="no such file or directory: /run/kata-containers/shared/sandboxes/5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9/5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9/rootfs"
time="2020-03-12T18:25:52.649911387+09:00" level=info msg="sanner return error: read unix @->/run/vc/vm/5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9/qmp.sock: read: connection reset by peer" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 name=kata-runtime pid=11714 source=virtcontainers subsystem=qmp
time="2020-03-12T18:25:52.723598644+09:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 name=kata-runtime pid=11714 sandbox=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 source=virtcontainers subsystem=sandbox
time="2020-03-12T18:25:52.723948043+09:00" level=warning msg="failed to cleanup netns" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 error="failed to get netns /var/run/netns/cnitest-65d3f0c8-2f87-f626-be10-2f1423623748: failed to Statfs \"/var/run/netns/cnitest-65d3f0c8-2f87-f626-be10-2f1423623748\": no such file or directory" name=kata-runtime path=/var/run/netns/cnitest-65d3f0c8-2f87-f626-be10-2f1423623748 pid=11714 source=katautils
time="2020-03-12T18:25:52.724089591+09:00" level=error msg="QMP command failed: Bus 'pcie.0' does not support hotplugging" arch=amd64 command=create container=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 name=kata-runtime pid=11714 source=runtime
time="2020-03-13T08:41:56.513697995+09:00" level=info msg="sanner return error: read unix @->/run/vc/vm/e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338/qmp.sock: use of closed network connection" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=qmp
time="2020-03-13T08:41:57.212713906+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 iommuDevicesPath=/sys/kernel/iommu_groups/1/devices name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.21289313+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceFiles="[0xc0003cb930 0xc0003cba00 0xc0003cbad0]" name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.213059815+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceFile="0000:00:01.0" name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.213151453+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceBDF="00:01.0" name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.213224343+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceSysfsDev= name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.213295395+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device vfioDeviceType=1
time="2020-03-13T08:41:57.213366904+09:00" level=error msg=isPCIeDevice arch=amd64 bdf="00:01.0" command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.213418712+09:00" level=error msg=isPCIeDevice arch=amd64 command=create config.SysBusPciSlotsPath=/sys/bus/pci/slots container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.213500182+09:00" level=error msg=isPCIeDevice arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 slots="[]" source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.213672931+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device vfio.IsPCIe=false
time="2020-03-13T08:41:57.213731964+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceFile="0000:01:00.0" name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.21379391+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceBDF="01:00.0" name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.21384387+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceSysfsDev= name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.213889953+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device vfioDeviceType=1
time="2020-03-13T08:41:57.213945113+09:00" level=error msg=isPCIeDevice arch=amd64 bdf="01:00.0" command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.213992938+09:00" level=error msg=isPCIeDevice arch=amd64 command=create config.SysBusPciSlotsPath=/sys/bus/pci/slots container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.214070596+09:00" level=error msg=isPCIeDevice arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 slots="[]" source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.214235927+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device vfio.IsPCIe=false
time="2020-03-13T08:41:57.214301777+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceFile="0000:01:00.1" name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.214351388+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceBDF="01:00.1" name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.214398668+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 deviceSysfsDev= name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.214449587+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device vfioDeviceType=1
time="2020-03-13T08:41:57.214499939+09:00" level=error msg=isPCIeDevice arch=amd64 bdf="01:00.1" command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.214547001+09:00" level=error msg=isPCIeDevice arch=amd64 command=create config.SysBusPciSlotsPath=/sys/bus/pci/slots container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.214625892+09:00" level=error msg=isPCIeDevice arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 slots="[]" source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.214790445+09:00" level=error msg=Attach arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=device vfio.IsPCIe=false
time="2020-03-13T08:41:57.216457744+09:00" level=error msg="hotplugVFIODevice!!!" ***device.Bus= arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=qemu
time="2020-03-13T08:41:57.216507428+09:00" level=error msg="hotplugVFIODevice!!!" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 q.state.HotplugVFIOOnRootBus=true source=virtcontainers subsystem=qemu
time="2020-03-13T08:41:57.216582247+09:00" level=error msg="hotplugVFIODevice!!!" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 machinneType=q35 name=kata-runtime pid=10987 source=virtcontainers subsystem=qemu
time="2020-03-13T08:41:57.216636689+09:00" level=error msg="hotplugVFIODevice!!!" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 device.IsPCIe=false name=kata-runtime pid=10987 source=virtcontainers subsystem=qemu
time="2020-03-13T08:41:57.216712816+09:00" level=error msg="hotplugVFIODevice!!!" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 q.state.PCIeRootPort=2 source=virtcontainers subsystem=qemu
time="2020-03-13T08:41:57.216759638+09:00" level=error msg="hotplugVFIODevice!!!" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 device.Bus= name=kata-runtime pid=10987 source=virtcontainers subsystem=qemu
time="2020-03-13T08:41:57.216818147+09:00" level=error msg="hotplugVFIODevice!!!" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 device.Type=1 name=kata-runtime pid=10987 source=virtcontainers subsystem=qemu
time="2020-03-13T08:41:57.22137848+09:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"Bus 'pcie.0' does not support hotplugging\"}}" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=qmp
time="2020-03-13T08:41:57.221591805+09:00" level=error msg="failed to hotplug VFIO device" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 error="QMP command failed: Bus 'pcie.0' does not support hotplugging" name=kata-runtime pid=10987 sandbox=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 source=virtcontainers subsystem=sandbox vfio-device-BDF="00:01.0" vfio-device-ID=vfio-96dc385aecd37bad0
time="2020-03-13T08:41:57.221714168+09:00" level=error msg="Failed to add device" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 error="QMP command failed: Bus 'pcie.0' does not support hotplugging" name=kata-runtime pid=10987 source=virtcontainers subsystem=device
time="2020-03-13T08:41:57.221809869+09:00" level=warning msg="no such file or directory: /run/kata-containers/shared/sandboxes/e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338/e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338/rootfs"
time="2020-03-13T08:41:57.288875338+09:00" level=info msg="sanner return error: read unix @->/run/vc/vm/e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338/qmp.sock: read: connection reset by peer" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=virtcontainers subsystem=qmp
time="2020-03-13T08:41:57.384379844+09:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 sandbox=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 source=virtcontainers subsystem=sandbox
time="2020-03-13T08:41:57.384921984+09:00" level=warning msg="failed to cleanup netns" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 error="failed to get netns /var/run/netns/cnitest-3ebaf7cf-c4fb-9ac0-82c8-b1b6bf0634b9: failed to Statfs \"/var/run/netns/cnitest-3ebaf7cf-c4fb-9ac0-82c8-b1b6bf0634b9\": no such file or directory" name=kata-runtime path=/var/run/netns/cnitest-3ebaf7cf-c4fb-9ac0-82c8-b1b6bf0634b9 pid=10987 source=katautils
time="2020-03-13T08:41:57.385176236+09:00" level=error msg="QMP command failed: Bus 'pcie.0' does not support hotplugging" arch=amd64 command=create container=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 name=kata-runtime pid=10987 source=runtime

Proxy logs

Recent proxy problems found in system journal:

time="2020-03-12T14:24:18.895496489+09:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/29f29ce965e752ae01dcb69250e4338ad9135a07103de3311aad857d807d6e4f/kata.sock: use of closed network connection" name=kata-proxy pid=29800 sandbox=29f29ce965e752ae01dcb69250e4338ad9135a07103de3311aad857d807d6e4f source=proxy
time="2020-03-12T14:27:18.346261491+09:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/b8689e453eb16c2fbb908e06a137bca9f503663b45055a09bad40b4775384886/kata.sock: use of closed network connection" name=kata-proxy pid=3351 sandbox=b8689e453eb16c2fbb908e06a137bca9f503663b45055a09bad40b4775384886 source=proxy
time="2020-03-12T14:31:53.555839648+09:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/0d3dadc306664806c15d5ce71780bcb21f3db76b3abd014d02e57dfea0abaf42/kata.sock: use of closed network connection" name=kata-proxy pid=11778 sandbox=0d3dadc306664806c15d5ce71780bcb21f3db76b3abd014d02e57dfea0abaf42 source=proxy
time="2020-03-12T14:32:21.212545749+09:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/64417a4ebce1b44db46b4de3e176e06cc930269f6fba7e4e0d364b257e50a180/proxy.sock: use of closed network connection" name=kata-proxy pid=12698 sandbox=64417a4ebce1b44db46b4de3e176e06cc930269f6fba7e4e0d364b257e50a180 source=proxy
time="2020-03-12T14:33:38.816626221+09:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/d04c885ff485401338dbebfd1be96910c08674fd79e54867e06a062556c16e58/proxy.sock: use of closed network connection" name=kata-proxy pid=15119 sandbox=d04c885ff485401338dbebfd1be96910c08674fd79e54867e06a062556c16e58 source=proxy
time="2020-03-12T14:34:13.151095713+09:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/0753f5376fa13d24f1027f62248e6ff647375620508fdc8b28471c25693d9283/kata.sock: use of closed network connection" name=kata-proxy pid=16232 sandbox=0753f5376fa13d24f1027f62248e6ff647375620508fdc8b28471c25693d9283 source=proxy
time="2020-03-12T14:49:12.430755672+09:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/275198c00562d5046257a542ad517b8371d857c92ef46d32e102e653bc0b3bc2/kata.sock: use of closed network connection" name=kata-proxy pid=17561 sandbox=275198c00562d5046257a542ad517b8371d857c92ef46d32e102e653bc0b3bc2 source=proxy
time="2020-03-12T14:53:06.274766616+09:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/19117dd8d8b5531fa4c68d3c67d650f0af8e5d0dc55ff8d4d8430db5e974cf1f/proxy.sock: use of closed network connection" name=kata-proxy pid=1833 sandbox=19117dd8d8b5531fa4c68d3c67d650f0af8e5d0dc55ff8d4d8430db5e974cf1f source=proxy
time="2020-03-12T14:54:10.73719904+09:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/6e9488df097f7b68323d65b94305133a509bea3ca5f743317775131c39197775/proxy.sock: use of closed network connection" name=kata-proxy pid=4322 sandbox=6e9488df097f7b68323d65b94305133a509bea3ca5f743317775131c39197775 source=proxy
time="2020-03-12T18:25:52.609908751+09:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9/kata.sock: use of closed network connection" name=kata-proxy pid=11765 sandbox=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 source=proxy
time="2020-03-12T18:25:52.609912354+09:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9/proxy.sock: use of closed network connection" name=kata-proxy pid=11765 sandbox=5b8e522d935ae7c6860c28c229b9c424c226d5e770195c0c955fa025c05c12d9 source=proxy
time="2020-03-13T08:41:57.235613631+09:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338/kata.sock: use of closed network connection" name=kata-proxy pid=11043 sandbox=e69dd6ef9ba749c3ac13663eddcd840adbe3fc5cf1e98b9ee1c64fe829631338 source=proxy

Shim logs

No recent shim problems found in system journal.

Throttler logs

No recent throttler problems found in system journal.


Container manager details

Have docker

Docker

Output of "docker version":

Client: Docker Engine - Community
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.16
 Git commit:        369ce74a3c
 Built:             Thu Feb 13 01:27:49 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.16
  Git commit:       369ce74a3c
  Built:            Thu Feb 13 01:26:21 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.12
  GitCommit:        35bd7a5f69c13e1563af8a93431411cd9ecf5021
 runc:
  Version:          v1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Output of "docker info":

Client:
 Debug Mode: false

Server:
 Containers: 4
  Running: 0
  Paused: 0
  Stopped: 4
 Images: 27
 Server Version: 19.03.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: kata-runtime nvidia runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 35bd7a5f69c13e1563af8a93431411cd9ecf5021
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.3.0-40-generic
 Operating System: Ubuntu 18.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 15.65GiB
 Name: wansu-ubuntu
 ID: YHYJ:HGRZ:UAEB:VU7O:5OQ3:EZZF:UVUW:3R42:RW6Y:3O6X:3WK5:JEY7
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Output of "systemctl show docker":

Type=notify
Restart=always
NotifyAccess=main
RestartUSec=2s
TimeoutStartUSec=infinity
TimeoutStopUSec=infinity
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Fri 2020-03-13 08:38:21 KST
WatchdogTimestampMonotonic=22131587
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=2761
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Fri 2020-03-13 08:38:20 KST
ExecMainStartTimestampMonotonic=21722253
ExecMainExitTimestampMonotonic=0
ExecMainPID=2761
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock ; ignore_errors=no ; start_time=[Fri 2020-03-13 08:38:20 KST] ; stop_time=[n/a] ; pid=2761 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/docker.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=18
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=infinity
LimitNOFILESoft=infinity
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=63515
LimitSIGPENDINGSoft=63515
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictSUIDSGID=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=docker.socket sysinit.target system.slice
Wants=network-online.target
BindsTo=containerd.service
WantedBy=multi-user.target
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=containerd.service system.slice basic.target docker.socket network-online.target firewalld.service systemd-journald.socket sysinit.target
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/docker.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Fri 2020-03-13 08:38:21 KST
StateChangeTimestampMonotonic=22131589
InactiveExitTimestamp=Fri 2020-03-13 08:38:20 KST
InactiveExitTimestampMonotonic=21722274
ActiveEnterTimestamp=Fri 2020-03-13 08:38:21 KST
ActiveEnterTimestampMonotonic=22131589
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Fri 2020-03-13 08:38:20 KST
ConditionTimestampMonotonic=21721665
AssertTimestamp=Fri 2020-03-13 08:38:20 KST
AssertTimestampMonotonic=21721666
Transient=no
Perpetual=no
StartLimitIntervalUSec=1min
StartLimitBurst=3
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=f2cd0107fbc249e980737edfe9a562e7
CollectMode=inactive

Have kubectl

Kubernetes

Output of "kubectl version":

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Output of "kubectl config view":

apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

Output of "systemctl show kubelet":

Type=simple
Restart=always
NotifyAccess=none
RestartUSec=10s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Fri 2020-03-13 08:38:14 KST
WatchdogTimestampMonotonic=15464626
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=1452
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Fri 2020-03-13 08:38:14 KST
ExecMainStartTimestampMonotonic=15464606
ExecMainExitTimestampMonotonic=0
ExecMainPID=1452
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CGROUP_ARGS $KUBELET_RUNTIME_ARGS $KUBELET_DNS_ARGS ; ignore_errors=no ; start_time=[Fri 2020-03-13 08:38:14 KST] ; stop_time=[n/a] ; pid=1452 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/kubelet.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=28
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=no
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=4915
IPAccounting=no
Environment=[unprintable] KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml KUBELET_EXTRA_ARGS=--fail-swap-on=false KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs KUBELET_DNS_ARGS=--cluster-dns=10.244.0.0 [unprintable]
EnvironmentFile=/var/lib/kubelet/kubeadm-flags.env (ignore_errors=yes)
EnvironmentFile=/etc/default/kubelet (ignore_errors=yes)
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=0
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=4096
LimitNOFILESoft=1024
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=63515
LimitNPROCSoft=63515
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=63515
LimitSIGPENDINGSoft=63515
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictSUIDSGID=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=kubelet.service
Names=kubelet.service
Requires=system.slice sysinit.target
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=systemd-journald.socket basic.target sysinit.target system.slice
Documentation=https://kubernetes.io/docs/home/
Description=kubelet: The Kubernetes Node Agent
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/kubelet.service
DropInPaths=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Fri 2020-03-13 08:38:14 KST
StateChangeTimestampMonotonic=15464627
InactiveExitTimestamp=Fri 2020-03-13 08:38:14 KST
InactiveExitTimestampMonotonic=15464627
ActiveEnterTimestamp=Fri 2020-03-13 08:38:14 KST
ActiveEnterTimestampMonotonic=15464627
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Fri 2020-03-13 08:38:14 KST
ConditionTimestampMonotonic=15463669
AssertTimestamp=Fri 2020-03-13 08:38:14 KST
AssertTimestampMonotonic=15463670
Transient=no
Perpetual=no
StartLimitIntervalUSec=0
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=758bc83cdc614cf9bfab58a0613145d6
CollectMode=inactive

Have crio

crio

Output of "crio --version":

crio version 1.17.0
commit: "76093256041842c5c8785dcab026f0eaa3220732-dirty"

Output of "systemctl show crio":

Type=notify
Restart=on-abnormal
NotifyAccess=main
RestartUSec=100ms
TimeoutStartUSec=infinity
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Fri 2020-03-13 08:38:21 KST
WatchdogTimestampMonotonic=21787470
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=2758
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Fri 2020-03-13 08:38:20 KST
ExecMainStartTimestampMonotonic=21719973
ExecMainExitTimestampMonotonic=0
ExecMainPID=2758
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/crio ; argv[]=/usr/bin/crio $CRIO_CONFIG_OPTIONS $CRIO_RUNTIME_OPTIONS $CRIO_STORAGE_OPTIONS $CRIO_NETWORK_OPTIONS $CRIO_METRICS_OPTIONS ; ignore_errors=no ; start_time=[Fri 2020-03-13 08:38:20 KST] ; stop_time=[n/a] ; pid=2758 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/crio.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=22
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=no
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
Environment=GOTRACEBACK=crash
EnvironmentFile=/etc/default/crio (ignore_errors=yes)
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=1048576
LimitNPROCSoft=1048576
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=63515
LimitSIGPENDINGSoft=63515
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=-999
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictSUIDSGID=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=crio.service
Names=crio.service
Requires=sysinit.target crio-wipe.service system.slice
Wants=network-online.target
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=network-online.target crio-wipe.service system.slice sysinit.target basic.target systemd-journald.socket
Documentation=https://github.com/cri-o/cri-o
Description=Container Runtime Interface for OCI (CRI-O)
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/usr/lib/systemd/system/crio.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Fri 2020-03-13 08:38:21 KST
StateChangeTimestampMonotonic=21787479
InactiveExitTimestamp=Fri 2020-03-13 08:38:20 KST
InactiveExitTimestampMonotonic=21719995
ActiveEnterTimestamp=Fri 2020-03-13 08:38:21 KST
ActiveEnterTimestampMonotonic=21787479
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Fri 2020-03-13 08:38:20 KST
ConditionTimestampMonotonic=21719168
AssertTimestamp=Fri 2020-03-13 08:38:20 KST
AssertTimestampMonotonic=21719170
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=a621708fbf9749a4a2b41fc267b0fe1b
CollectMode=inactive

Output of "cat /etc/crio/crio.conf":

# The CRI-O configuration file specifies all of the available configuration
# options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
# daemon, but in a TOML format that can be more easily modified and versioned.
#
# Please refer to crio.conf(5) for details of all configuration options.

# CRI-O supports partial configuration reload during runtime, which can be
# done by sending SIGHUP to the running process. Currently supported options
# are explicitly mentioned with: 'This option supports live configuration
# reload'.

# CRI-O reads its storage defaults from the containers-storage.conf(5) file
# located at /etc/containers/storage.conf. Modify this storage configuration if
# you want to change the system's defaults. If you want to modify storage just
# for CRI-O, you can change the storage configuration options here.
[crio]

# Path to the "root directory". CRI-O stores all of its data, including
# containers images, in this directory.
#root = "/home/abuild/.local/share/containers/storage"

# Path to the "run directory". CRI-O stores all of its state in this directory.
#runroot = "/tmp/399/containers"

# Storage driver used to manage the storage of images and containers. Please
# refer to containers-storage.conf(5) to see all available storage drivers.
#storage_driver = "vfs"

# List to pass options to the storage driver. Please refer to
# containers-storage.conf(5) to see all available storage options.
#storage_option = [
#]

# The default log directory where all logs will go unless directly specified by
# the kubelet. The log directory specified must be an absolute directory.
log_dir = "/var/log/crio/pods"

# Location for CRI-O to lay down the version file
version_file = "/var/run/crio/version"

# The crio.api table contains settings for the kubelet/gRPC interface.
[crio.api]

# Path to AF_LOCAL socket on which CRI-O will listen.
listen = "/var/run/crio/crio.sock"

# IP address on which the stream server will listen.
stream_address = "127.0.0.1"

# The port on which the stream server will listen.
stream_port = "0"

# Enable encrypted TLS transport of the stream server.
stream_enable_tls = false

# Path to the x509 certificate file used to serve the encrypted stream. This
# file can change, and CRI-O will automatically pick up the changes within 5
# minutes.
stream_tls_cert = ""

# Path to the key file used to serve the encrypted stream. This file can
# change and CRI-O will automatically pick up the changes within 5 minutes.
stream_tls_key = ""

# Path to the x509 CA(s) file used to verify and authenticate client
# communication with the encrypted stream. This file can change and CRI-O will
# automatically pick up the changes within 5 minutes.
stream_tls_ca = ""

# Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
grpc_max_send_msg_size = 16777216

# Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
grpc_max_recv_msg_size = 16777216

# The crio.runtime table contains settings pertaining to the OCI runtime used
# and options for how to set up and manage the OCI runtime.
[crio.runtime]
# refer from https://github.com/kata-containers/runtime/issues/1912
manage_network_ns_lifecycle = true

# A list of ulimits to be set in containers by default, specified as
# "<ulimit name>=<soft limit>:<hard limit>", for example:
# "nofile=1024:2048"
# If nothing is set here, settings will be inherited from the CRI-O daemon
#default_ulimits = [
#]

# default_runtime is the _name_ of the OCI runtime to be used as the default.
# The name is matched against the runtimes map below.
default_runtime = "runc"

# If true, the runtime will not use pivot_root, but instead use MS_MOVE.
no_pivot = false

# decryption_keys_path is the path where the keys required for
# image decryption are stored.
decryption_keys_path = "/etc/crio/keys/"

# Path to the conmon binary, used for monitoring the OCI runtime.
# Will be searched for using $PATH if empty.
conmon = ""

# Cgroup setting for conmon
conmon_cgroup = "system.slice"

# Environment variable list for the conmon process, used for passing necessary
# environment variables to conmon or the runtime.
conmon_env = [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
]

# If true, SELinux will be used for pod separation on the host.
selinux = false

# Path to the seccomp.json profile which is used as the default seccomp profile
# for the runtime. If not specified, then the internal default seccomp profile
# will be used.
seccomp_profile = ""

# Used to change the name of the default AppArmor profile of CRI-O. The default
# profile name is "crio-default-" followed by the version string of CRI-O.
apparmor_profile = "crio-default-1.17.0"

# Cgroup management implementation used for the runtime.
cgroup_manager = "cgroupfs"

# List of default capabilities for containers. If it is empty or commented out,
# only the capabilities defined in the containers json file by the user/kube
# will be added.
default_capabilities = [
        "CHOWN", 
        "DAC_OVERRIDE", 
        "FSETID", 
        "FOWNER", 
        "NET_RAW", 
        "SETGID", 
        "SETUID", 
        "SETPCAP", 
        "NET_BIND_SERVICE", 
        "SYS_CHROOT", 
        "KILL", 
]

# List of default sysctls. If it is empty or commented out, only the sysctls
# defined in the container json file by the user/kube will be added.
default_sysctls = [
]

# List of additional devices. specified as
# "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
#If it is empty or commented out, only the devices
# defined in the container json file by the user/kube will be added.
additional_devices = [
]

# Path to OCI hooks directories for automatically executed hooks.
hooks_dir = [
        "/usr/share/containers/oci/hooks.d", 
]

# List of default mounts for each container. **Deprecated:** this option will
# be removed in future versions in favor of default_mounts_file.
default_mounts = [
]

# Path to the file specifying the defaults mounts for each container. The
# format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
# its default mounts from the following two files:
#
#   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
#      override file, where users can either add in their own default mounts, or
#      override the default mounts shipped with the package.
#
#   2) /usr/share/containers/mounts.conf: This is the default file read for
#      mounts. If you want CRI-O to read from a different, specific mounts file,
#      you can change the default_mounts_file. Note, if this is done, CRI-O will
#      only add mounts it finds in this file.
#
#default_mounts_file = ""

# Maximum number of processes allowed in a container.
pids_limit = 1024

# Maximum sized allowed for the container log file. Negative numbers indicate
# that no size limit is imposed. If it is positive, it must be >= 8192 to
# match/exceed conmon's read buffer. The file is truncated and re-opened so the
# limit is never exceeded.
log_size_max = -1

# Whether container output should be logged to journald in addition to the kuberentes log file
log_to_journald = false

# Path to directory in which container exit files are written to by conmon.
container_exits_dir = "/var/run/crio/exits"

# Path to directory for container attach sockets.
container_attach_socket_dir = "/var/run/crio"

# The prefix to use for the source of the bind mounts.
bind_mount_prefix = ""

# If set to true, all containers will run in read-only mode.
read_only = false

# Changes the verbosity of the logs based on the level it is set to. Options
# are fatal, panic, error, warn, info, debug and trace. This option supports
# live configuration reload.
log_level = "info"

# Filter the log messages by the provided regular expression.
# This option supports live configuration reload.
log_filter = ""

# The UID mappings for the user namespace of each container. A range is
# specified in the form containerUID:HostUID:Size. Multiple ranges must be
# separated by comma.
uid_mappings = ""

# The GID mappings for the user namespace of each container. A range is
# specified in the form containerGID:HostGID:Size. Multiple ranges must be
# separated by comma.
gid_mappings = ""

# The minimal amount of time in seconds to wait before issuing a timeout
# regarding the proper termination of the container.
ctr_stop_timeout = 0

# **DEPRECATED** this option is being replaced by manage_ns_lifecycle, which is described below.
# manage_network_ns_lifecycle = false

# manage_ns_lifecycle determines whether we pin and remove namespaces
# and manage their lifecycle
manage_ns_lifecycle = false

# The directory where the state of the managed namespaces gets tracked.
# Only used when manage_ns_lifecycle is true.
namespaces_dir = "/var/run/crio/ns"

# pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
pinns_path = ""

# The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
# The runtime to use is picked based on the runtime_handler provided by the CRI.
# If no runtime_handler is provided, the runtime will be picked based on the level
# of trust of the workload. Each entry in the table should follow the format:
#
#[crio.runtime.runtimes.runtime-handler]
#  runtime_path = "/path/to/the/executable"
#  runtime_type = "oci"
#  runtime_root = "/path/to/the/root"
#
# Where:
# - runtime-handler: name used to identify the runtime
# - runtime_path (optional, string): absolute path to the runtime executable in
#   the host filesystem. If omitted, the runtime-handler identifier should match
#   the runtime executable name, and the runtime executable should be placed
#   in $PATH.
# - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
#   omitted, an "oci" runtime is assumed.
# - runtime_root (optional, string): root directory for storage of containers
#   state.


[crio.runtime.runtimes.runc]
runtime_path = ""
runtime_type = "oci"
runtime_root = "/run/runc"


# Kata Containers is an OCI runtime, where containers are run inside lightweight
# VMs. Kata provides additional isolation towards the host, minimizing the host attack
# surface and mitigating the consequences of containers breakout.

# Kata Containers with the default configured VMM
[crio.runtime.runtimes.kata]
runtime_path = "/usr/bin/kata-runtime"
runtime_type = "vm"

# Kata Containers with the QEMU VMM
#[crio.runtime.runtimes.kata-qemu]

# Kata Containers with the Firecracker VMM
#[crio.runtime.runtimes.kata-fc]

# Containers with Nvidia GPU
[crio.runtime.runtimes.nvidia]
runtime_path = "/usr/bin/nvidia-container-runtime"
runtime_type = "oci"


# The crio.image table contains settings pertaining to the management of OCI images.
#
# CRI-O reads its configured registries defaults from the system wide
# containers-registries.conf(5) located in /etc/containers/registries.conf. If
# you want to modify just CRI-O, you can change the registries configuration in
# this file. Otherwise, leave insecure_registries and registries commented out to
# use the system's defaults from /etc/containers/registries.conf.
[crio.image]

# Default transport for pulling images from a remote container storage.
default_transport = "docker://"

# The path to a file containing credentials necessary for pulling images from
# secure registries. The file is similar to that of /var/lib/kubelet/config.json
global_auth_file = ""

# The image used to instantiate infra containers.
# This option supports live configuration reload.
pause_image = "k8s.gcr.io/pause:3.1"

# The path to a file containing credentials specific for pulling the pause_image from
# above. The file is similar to that of /var/lib/kubelet/config.json
# This option supports live configuration reload.
pause_image_auth_file = ""

# The command to run to have a container stay in the paused state.
# When explicitly set to "", it will fallback to the entrypoint and command
# specified in the pause image. When commented out, it will fallback to the
# default: "/pause". This option supports live configuration reload.
pause_command = "/pause"

# Path to the file which decides what sort of policy we use when deciding
# whether or not to trust an image that we've pulled. It is not recommended that
# this option be used, as the default behavior of using the system-wide default
# policy (i.e., /etc/containers/policy.json) is most often preferred. Please
# refer to containers-policy.json(5) for more details.
signature_policy = ""

# List of registries to skip TLS verification for pulling images. Please
# consider configuring the registries via /etc/containers/registries.conf before
# changing them here.
#insecure_registries = "[]"

# Controls how image volumes are handled. The valid values are mkdir, bind and
# ignore; the latter will ignore volumes entirely.
image_volumes = "mkdir"

# List of registries to be used when pulling an unqualified image (e.g.,
# "alpine:latest"). By default, registries is set to "docker.io" for
# compatibility reasons. Depending on your workload and usecase you may add more
# registries (e.g., "quay.io", "registry.fedoraproject.org",
# "registry.opensuse.org", etc.).
registries = [
        "quay.io",
        "docker.io"
]


# The crio.network table containers settings pertaining to the management of
# CNI plugins.
[crio.network]

# Path to the directory where CNI configuration files are located.
network_dir = "/etc/cni/net.d/"

# Paths to directories where CNI plugin binaries are located.
plugin_dirs = [
        "/opt/cni/bin/",
]

# A necessary configuration for Prometheus based metrics retrieval
[crio.metrics]

# Globally enable or disable metrics support.
enable_metrics = false

# The port on which the metrics server will listen.
metrics_port = 9090

Have containerd

containerd

Output of "containerd --version":

containerd github.com/containerd/containerd v1.2.7 85f6aa58b8a3170aec9824568f7a31832878b603

Output of "systemctl show containerd":

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Fri 2020-03-13 08:38:14 KST
WatchdogTimestampMonotonic=15621765
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=1584
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Fri 2020-03-13 08:38:14 KST
ExecMainStartTimestampMonotonic=15621739
ExecMainExitTimestampMonotonic=0
ExecMainPID=1584
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=yes ; start_time=[Fri 2020-03-13 08:38:14 KST] ; stop_time=[Fri 2020-03-13 08:38:14 KST] ; pid=1582 ; code=exited ; status=0 }
ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[Fri 2020-03-13 08:38:14 KST] ; stop_time=[n/a] ; pid=1584 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/containerd.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=117
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=63515
LimitSIGPENDINGSoft=63515
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictSUIDSGID=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=containerd.service
Names=containerd.service
Requires=system.slice sysinit.target
WantedBy=multi-user.target
BoundBy=docker.service
Conflicts=shutdown.target
Before=multi-user.target shutdown.target docker.service
After=basic.target network.target sysinit.target system.slice systemd-journald.socket
Documentation=https://containerd.io
Description=containerd container runtime
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/containerd.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Fri 2020-03-13 08:38:14 KST
StateChangeTimestampMonotonic=15621766
InactiveExitTimestamp=Fri 2020-03-13 08:38:14 KST
InactiveExitTimestampMonotonic=15618935
ActiveEnterTimestamp=Fri 2020-03-13 08:38:14 KST
ActiveEnterTimestampMonotonic=15621766
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Fri 2020-03-13 08:38:14 KST
ConditionTimestampMonotonic=15618326
AssertTimestamp=Fri 2020-03-13 08:38:14 KST
AssertTimestampMonotonic=15618326
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=1bdf023fb035403db77f4511475e5572
CollectMode=inactive

Output of "cat /etc/containerd/config.toml":

root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""

[metrics]
  address = ""
  grpc_histogram = false

[cgroup]
  path = ""

[plugins]
  [plugins.cgroups]
    no_prometheus = false
  [plugins.cri]
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    enable_selinux = false
    sandbox_image = "k8s.gcr.io/pause:3.1"
    stats_collect_period = 10
    systemd_cgroup = false
    enable_tls_streaming = false
    max_container_log_line_size = 16384
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
      no_pivot = false

      [plugins.cri.containerd.runtimes.runc]
         runtime_type = "io.containerd.runc.v1"
         [plugins.cri.containerd.runtimes.runc.options]
           NoPivotRoot = false
           NoNewKeyring = false
           ShimCgroup = ""
           IoUid = 0
           IoGid = 0
           BinaryName = "runc"
           Root = ""
           CriuPath = ""
           SystemdCgroup = false

      [plugins.cri.containerd.runtimes.kata]
         runtime_type = "io.containerd.runc.v1"
         [plugins.cri.containerd.runtimes.kata.options]
           NoPivotRoot = false
           NoNewKeyring = false
           ShimCgroup = ""
           IoUid = 0
           IoGid = 0
           BinaryName = "/usr/bin/kata-runtime"
           Root = ""
           CriuPath = ""
           SystemdCgroup = false

      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = ""
        runtime_root = ""
      [plugins.cri.containerd.untrusted_workload_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
    [plugins.cri.cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
    [plugins.cri.x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
  [plugins.diff-service]
    default = ["walking"]
  [plugins.linux]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins.opt]
    path = "/opt/containerd"
  [plugins.restart]
    interval = "10s"
  [plugins.scheduler]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = "0s"
    startup_delay = "100ms"

Packages

Have dpkg
Output of "dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"":

ii  ipxe-qemu-256k-compat-efi-roms             1.0.0+git-20150424.a25a16d-0ubuntu2              all          PXE boot firmware - Compat EFI ROM images for qemu
ii  kata-containers-image                      1.11.0~alpha0-42                                 amd64        Kata containers image
ii  kata-ksm-throttler                         1.11.0~alpha0-45                                 amd64        
ii  kata-linux-container                       5.4.15.66-45                                     amd64        linux kernel optimised for container-like workloads.
ii  kata-proxy                                 1.11.0~alpha0-43                                 amd64        
ii  kata-runtime                               1.11.0~alpha0-51                                 amd64        
ii  kata-shim                                  1.11.0~alpha0-41                                 amd64        
ii  qemu-arm-static                            1.6.0rc3-tizen20161231                           amd64        QEMU is an extremely well-performing CPU emulator that allows you to choose between simulating an entire system and running userspace binaries for different architectures under your native operating system. It currently emulates x86, ARM, PowerPC and SPARC CPUs as well as PC and PowerMac systems.
ii  qemu-block-extra:amd64                     1:2.11+dfsg-1ubuntu7.23                          amd64        extra block backend modules for qemu-system and qemu-utils
ii  qemu-kvm                                   1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU Full virtualization on x86 hardware
ii  qemu-slof                                  20170724+dfsg-1ubuntu1                           all          Slimline Open Firmware -- QEMU PowerPC version
ii  qemu-system                                1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU full system emulation binaries
ii  qemu-system-arm                            1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU full system emulation binaries (arm)
ii  qemu-system-common                         1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU full system emulation binaries (common files)
ii  qemu-system-mips                           1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU full system emulation binaries (mips)
ii  qemu-system-misc                           1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU full system emulation binaries (miscellaneous)
ii  qemu-system-ppc                            1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU full system emulation binaries (ppc)
ii  qemu-system-s390x                          1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU full system emulation binaries (s390x)
ii  qemu-system-sparc                          1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU full system emulation binaries (sparc)
ii  qemu-system-x86                            1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU full system emulation binaries (x86)
ii  qemu-user                                  1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU user mode emulation binaries
ii  qemu-user-binfmt                           1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU user mode binfmt registration for qemu-user
ii  qemu-utils                                 1:2.11+dfsg-1ubuntu7.23                          amd64        QEMU utilities
ii  qemu-vanilla                               4.1.1+git.99c5874a9b-46                          amd64        linux kernel optimised for container-like workloads.

Have rpm
Output of "rpm -qa|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"":



@amshinde
Member

@wansuyoo I think the PCIe hotplug driver pciehp may be missing on your system. Can you check for CONFIG_HOTPLUG_PCI_PCIE?

@wansuyoo

@amshinde Below is the output from my system.

$ uname -a
Linux wansu-ubuntu 5.3.0-40-generic #32~18.04.1-Ubuntu SMP Mon Feb 3 14:05:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

$ cat /boot/config-5.3.0-40-generic | grep CONFIG_HOTPLUG_PCI_PCIE 
CONFIG_HOTPLUG_PCI_PCIE=y

@amshinde
Member

@wansuyoo If the directory /sys/bus/pci/slots/ is empty on the host, then the PCI hotplug driver is not loaded, or the system may not contain hot-pluggable PCI slots.
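
For example, a quick sysfs check (generic commands, nothing Kata-specific assumed):

# if this prints nothing, no hotplug-capable slots were registered
$ ls /sys/bus/pci/slots/
# confirm pciehp support is built into the running kernel
$ grep CONFIG_HOTPLUG_PCI_PCIE /boot/config-$(uname -r)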
cc @grahamwhaley @devimc @Jimmy-Xu for your thoughts on this.

@wansuyoo We now have a wip PR for hotplugging Nvidia GPUs for your reference: kata-containers/documentation#615

@wansuyoo

@amshinde @Jimmy-Xu

Thank you for your kind documentation on using Nvidia GPUs with Kata Containers.
Also, I'm sorry to keep asking questions about the completed PR.
I followed the guide in the documentation, and I'm successfully able to pass one of the GPUs to the guest VM through vfio-pci.
It shows up in lspci in the container.

00:08.0 VGA compatible controller [0300]: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:1d01] (rev a1) (prog-if 00 [VGA controller])
	Subsystem: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:11c7]
	Physical Slot: 8
	Flags: bus master, fast devsel, latency 0, IRQ 10
	Memory at c0000000 (32-bit, non-prefetchable) [size=16M]
	Memory at 1ac0000000 (64-bit, prefetchable) [size=256M]
	Memory at 1ad0000000 (64-bit, prefetchable) [size=32M]
	I/O ports at 1000 [size=128]
	Capabilities: [60] Power Management version 3
	Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [78] Express Legacy Endpoint, MSI 00
	Kernel driver in use: nvidia

But I have another problem.
When I check the Nvidia device status with nvidia-smi, it fails as shown below,
and the Nvidia GPU information contains errors.

# nvidia-smi 
Unable to determine the device handle for GPU 0000:00:08.0: Unknown Error

# cat /proc/driver/nvidia/gpus/0000\:00\:08.0/information 
Model: 		 Unknown
IRQ:   		 35
GPU UUID: 	 GPU-????????-????-????-????-????????????
Video BIOS: 	 ??.??.??.??.??
Bus Type: 	 PCIe
DMA Size: 	 36 bits
DMA Mask: 	 0xfffffffff
Bus Location: 	 0000:00:08.0
Device Minor: 	 0
Blacklisted:	 No

Is there anything I missed that I should do?

@wansuyoo

@amshinde @Jimmy-Xu
The same issue was reported in the Nvidia community:
https://forums.developer.nvidia.com/t/nvidia-smi-reports-unable-to-determine-the-device-handle-for-gpu/46835
Here is the guidance from that thread:

This is a VMWare pass-through (directpath) problem.
You must add to your vmx file the following directive:

hypervisor.cpuid.v0 = FALSE

Please guide me on how to apply this in Kata Containers.

@amshinde
Member

@wansuyoo I don't have an Nvidia device to try this out, unfortunately.
I would first check whether the nvidia driver has been loaded.
Looking at similar issues posted on forums, I see this could be solved by passing -cpu host,kvm=off to qemu.
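
For illustration, that flag sits on the QEMU command line like this (the rest of the Kata-generated invocation is elided):

# hide the KVM CPUID signature so the guest Nvidia driver does not refuse to load
$ qemu-system-x86_64 ... -cpu host,kvm=off ...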

@Jimmy-Xu Can you help out @wansuyoo with this?

@gnawux
Member

gnawux commented Mar 20, 2020

@wansuyoo how it works is documented here: kata-containers/documentation#615

This doc PR has been approved and will be merged once the CI passes.

@gnawux
Member

gnawux commented Mar 20, 2020

@Jimmy-Xu
Contributor Author

@amshinde I think you are right.
The Nvidia driver for the GeForce GT 1030 throws Error 43 if it detects that the GPU is being passed through to a virtual machine.
Setting KVM to hidden is the solution.

@wansuyoo
FYI: https://mathiashueber.com/fighting-error-43-nvidia-gpu-virtual-machine/#error-43-vm-config

I think we can try appending kvm=off to the CPU model string returned by this function:

func (q *qemuAmd64) cpuModel() string {

@Jimmy-Xu
Contributor Author

Jimmy-Xu commented Mar 21, 2020

@wansuyoo
I created a PR for you
#2547

Can you give it a try with your Nvidia graphic card?

BTW, I noticed that your device is not a large BAR device, so q35 is not required. You can use:

machine_type = "pc"
hotplug_vfio_on_root_bus = true
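
For reference, both options live in the [hypervisor.qemu] section of the Kata configuration file (the path below is the usual default for Kata 1.x packages; yours may differ):

# e.g. /usr/share/defaults/kata-containers/configuration.toml
[hypervisor.qemu]
machine_type = "pc"
hotplug_vfio_on_root_bus = true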

@wansuyoo

@Jimmy-Xu @amshinde

Yes, Nvidia GeForce GT 1030 card is not a large BAR device.

	Subsystem: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:11c7]
	Flags: fast devsel, IRQ 120, NUMA node 0
	Memory at df000000 (32-bit, non-prefetchable) [size=16M]
	Memory at 3800a0000000 (64-bit, prefetchable) [size=256M]
	Memory at 3800b0000000 (64-bit, prefetchable) [size=32M]

So, I have configured it as you mentioned above.

machine_type = "pc"
hotplug_vfio_on_root_bus = true

Also, to set the KVM hidden option in Kata's qemu, I tested with a runtime that includes PR #2547.
But the option was not set when launching the VM from qemu,
because in my env q.nestedRun is false:

if q.nestedRun {

Is there additional configuration needed to enable this nestedRun option?

@Jimmy-Xu
Contributor Author

@wansuyoo I have updated the PR; nestedRun is no longer required. Please try again.

@wansuyoo

@Jimmy-Xu
Yes, I have already modified and tested it like that, and I could see it working in the Kata container.
Thanks a lot.

root@8133ce2719ed:~# nvidia-smi
Mon Mar 23 16:21:02 2020      
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64       Driver Version: 440.64       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 1030     Off  | 00000000:00:08.0 Off |                  N/A |
| 38%   49C    P0    N/A /  30W |      0MiB /  2001MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 1030     Off  | 00000000:00:0A.0 Off |                  N/A |
| 35%   46C    P0    N/A /  30W |      0MiB /  2001MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                                
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

@Jimmy-Xu
Contributor Author

@wansuyoo
Passing a VFIO device with -device vfio-pci,host=xx:xx.x is not supported in the Kata container, because the Kata runtime parses the device info of the whole IOMMU group for VFIO devices.

FYI:

// Pass all devices in iommu group
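
For illustration, you can list on the host which devices share an IOMMU group, since all of them are passed to the guest together (generic sysfs command; group 39 is just an example):

# every device in the group travels together into the guest
$ ls /sys/kernel/iommu_groups/39/devices/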

@wansuyoo

wansuyoo commented Mar 24, 2020

@Jimmy-Xu
Sorry, I deleted my previous question; it was my mistake.
Through the docker --device option with the IOMMU group id, I could assign one of the PCI devices, even though they have the same device IDs.

$ docker run -it -d --runtime=kata-runtime --device /dev/vfio/39 ubuntu

But to install the nvidia module in the container, I have to set the privileged option when creating the container.
In that case, all of the vfio-pci devices are assigned to the container.
So I asked whether there is a way to assign a vfio-pci device by PCI address.

$ docker run --privileged -it -d --runtime=kata-runtime --device /dev/vfio/39 ubuntu /bin/bash
root@4c59921e9a79:/# lspci
...
00:08.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] (rev a1)
00:09.0 Audio device: NVIDIA Corporation GP108 High Definition Audio Controller (rev a1)
00:0a.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] (rev a1)
00:0b.0 Audio device: NVIDIA Corporation GP108 High Definition Audio Controller (rev a1)

@wansuyoo

@Jimmy-Xu
I keep asking questions.
Is it possible to install the Nvidia driver in the kata-vm image, rather than in a container within the kata-vm?
I think it would be helpful if we could prepare VM images with the Nvidia driver installed when deploying kata-containers.

@amshinde
Member

amshinde commented Mar 25, 2020

@wansuyoo You are seeing all vfio devices being passed to the container because, in the privileged case, all host devices are passed to the container. We have addressed this behaviour in the CRI runtimes (containerd and CRI-O) so that host devices are not passed to privileged containers, but a similar PR is still pending in docker: moby/moby#39702

Meanwhile, can you try running the container with "--cap-add all", as:

docker run -it  --runtime=kata-runtime --cap-add all --device /dev/vfio/39 ubuntu bash

to install the nvidia driver. I suspect you need just elevated capabilities and not a full-blown privileged container. Let me know if that works for you.

Also, as the documentation provided by @Jimmy-Xu says, you can build a guest kernel with all the Nvidia drivers. Any reason why that doesn't work for you?

@wansuyoo

wansuyoo commented Mar 25, 2020

@amshinde

I suspect you need just elevated capabilities and not a full-blown privileged container. Let me know if that works for you.

Yes, it works when adding capabilities to the container with the "--cap-add=ALL" option.
I can create 2 containers, each with one of the node's PCIe devices.

Also, as the documentation provided by @Jimmy-Xu says, you can build a guest kernel with all the Nvidia drivers. Any reason why that doesn't work for you?

You are right, I can install the Nvidia driver in a container created within the guest (OS and kernel).
But my question was whether it is possible to build the guest (OS and kernel) with the Nvidia driver already installed, using osbuilder and packaging/kernel.

@Jimmy-Xu
Contributor Author

@wansuyoo

my question was whether it is possible to build the guest (OS and kernel) with the Nvidia driver already installed, using osbuilder and packaging/kernel.

Do you want to load the Nvidia driver when starting the Kata container?

I think that when building the guest rootfs, the Nvidia driver (*.ko) files should be copied into the $ROOT_FS/lib/modules/$guest_kernel_version/kernel/drivers/video/ directory.

Then you need to run depmod, which is used to generate module dependency data for the guest kernel.

# where ROOT_FS is the osbuilder rootfs
ROOT_FS=$GOPATH/src/github.com/kata-containers/osbuilder/rootfs-builder/rootfs

# maybe:
$ chroot $ROOT_FS depmod
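
Putting the copy and depmod steps together, a minimal end-to-end sketch (the module source path and the kernel version are assumptions; adjust them for your build). Running depmod inside the chroot with an explicit version makes it index the guest's modules rather than the host's:

# hypothetical sequence for baking Nvidia modules into the guest rootfs
$ export ROOT_FS=$GOPATH/src/github.com/kata-containers/osbuilder/rootfs-builder/rootfs
$ export GUEST_KVER=5.4.15        # assumed: your guest kernel version
$ sudo cp nvidia*.ko "$ROOT_FS/lib/modules/$GUEST_KVER/kernel/drivers/video/"
$ sudo chroot "$ROOT_FS" depmod -a "$GUEST_KVER"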

To load the Nvidia driver when starting the Kata container:
FYI:

Other reference:

I hope it is helpful for you.

@devimc

devimc commented Mar 25, 2020

@wansuyoo let us know if you cannot load the kernel modules using annotations; you may be seeing this issue: kata-containers/agent#709
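
For reference, a hedged sketch of the annotation route (the annotation key io.katacontainers.config.agent.kernel_modules and podman's --annotation flag are assumptions; check what your Kata and runtime versions actually support):

# hypothetical: ask the Kata agent to modprobe modules at sandbox start
$ podman run --runtime=kata-runtime \
    --annotation io.katacontainers.config.agent.kernel_modules="nvidia;nvidia_uvm" \
    --device /dev/vfio/39 -it ubuntu bash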
