grpc Deadline Exceeded #117

@sergi-diaz-ctrl4enviro

Description

Describe the bug

I deployed an application built with the Computer Vision SDK using Docker.
The application runs correctly. However, after approximately three weeks of uptime the inference server stops responding to requests.
This is the error message I get:

<_InactiveRpcError of RPC that terminated with:
	status = StatusCode.DEADLINE_EXCEEDED
	details = "Deadline Exceeded"
	debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"Deadline Exceeded", grpc_status:4, created_time:"2025-11-10T07:30:09.719932506+00:00"}"

I cannot find anything related in the inference server container logs.
The issue is temporarily resolved by recreating the application and inference server containers (docker compose down followed by docker compose up).
Is there a way to debug this further? Has anyone encountered the same problem? Any help is appreciated.
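Not part of the original report, but one client-side mitigation worth trying while debugging: when every call starts failing with DEADLINE_EXCEEDED after weeks of uptime, the gRPC channel itself may be wedged, and recreating the stub (and its channel) on a deadline error can clear the fault without recreating the containers. A minimal sketch follows; it uses a stand-in exception so it is self-contained, but in real code you would catch `grpc.RpcError` and check `e.code() == grpc.StatusCode.DEADLINE_EXCEEDED`. The helper names are hypothetical:

```python
import time


class DeadlineExceeded(Exception):
    """Stand-in for a grpc.RpcError whose code() is StatusCode.DEADLINE_EXCEEDED."""


def call_with_channel_reset(make_stub, do_call, retries=3, backoff_s=1.0):
    """Run an RPC; on a deadline error, rebuild the stub from scratch and
    retry with exponential backoff instead of reusing a possibly stuck channel.

    make_stub: zero-arg callable that opens a fresh channel and returns a stub.
    do_call:   callable taking the stub and performing one RPC.
    """
    stub = make_stub()
    last_err = None
    for attempt in range(retries):
        try:
            return do_call(stub)
        except DeadlineExceeded as err:
            last_err = err
            time.sleep(backoff_s * (2 ** attempt))
            stub = make_stub()  # fresh channel: often clears a wedged connection
    raise last_err
```

If the fresh channel also times out immediately, the fault is on the server side and the retry only narrows down where to look.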

Environment

  • Axis device model: AXIS P4707-PLVE Panoramic Camera
  • Axis device firmware version: 11.11.141
  • Stack trace or logs: see the gRPC error above; nothing relevant in the inference server container logs
  • Version: axisecp/acap-runtime 2.0.1-aarch64-containerized (see docker compose below)

Additional context

docker compose:

version: '3.3'
services:
  vision:
    image: vision:artpec8
    restart: unless-stopped
    network_mode: host
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "100k"
    environment:
      - PYTHONUNBUFFERED=1
      - INFERENCE_HOST=unix:///tmp/acap-runtime.sock
      - INFERENCE_PORT=0
    volumes:
      - inference-server:/tmp
  inference-server:
    image: axisecp/acap-runtime:2.0.1-aarch64-containerized
    restart: unless-stopped
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "100k"
    environment:
      - INFERENCE_CHIP=12
      - MODEL_PATH=/app/weights/model.tflite
    entrypoint: ["/opt/app/acap_runtime/acapruntime", "-m", "/app/weights/model.tflite", "-j", "12"] # PUT -j 2 FOR CPU AND -j 12 FOR DLPU
    volumes:
      - /run/dbus/system_bus_socket:/run/dbus/system_bus_socket
      - /run/parhand/parhandsocket:/run/parhand/parhandsocket
      - /usr/lib:/usr/lib
      - /usr/bin:/usr/bin
      - inference-server:/tmp
      - ./weights:/app/weights
volumes:
  acap_dl-models: {}
  inference-server: {}
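Since recreating the containers clears the fault, a possible stopgap (an assumption on my part, not something from the original setup) is a Docker healthcheck plus an auto-restart companion such as willfarrell/autoheal. The probe below only checks that the gRPC Unix socket exists; a more faithful probe would issue an actual inference call and exit nonzero on DEADLINE_EXCEEDED:

```yaml
  inference-server:
    # ... existing service definition ...
    healthcheck:
      test: ["CMD-SHELL", "test -S /tmp/acap-runtime.sock"]
      interval: 60s
      timeout: 10s
      retries: 3
    labels:
      - autoheal=true
  autoheal:
    image: willfarrell/autoheal
    restart: unless-stopped
    environment:
      - AUTOHEAL_CONTAINER_LABEL=autoheal
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

This restarts only the unhealthy container rather than the whole stack, which also gives you a timestamped restart event in `docker events` to correlate with the failures.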

Metadata

Labels: bug (Something isn't working)