Bug description
We run docker buildx in our CI/CD pipelines on GitHub Actions. We've used the same GitHub workflows for several years, with the default caching behavior set to inline caching and our images stored in Amazon ECR. Recently we upgraded our docker buildx version and started seeing errors like:
Error: buildx failed with: ERROR: failed to build: failed to solve: dependency sha256:e2aa7bbca7e0480ed9d31d7951c3300a24651faff1d5bc9718905c46ea394f40 is not part of the same cache chain
I couldn't find any documentation about this error. The message appears to have been introduced recently, but I don't know what it means. For context, we pass at least two --cache-from flags: the first points to a tag for the current branch, and the second to the tag for the main branch. Ideally, the first commit on a branch misses the branch cache and falls back to the main-branch cache; subsequent commits then hit the branch cache.
Since hitting this issue, we have switched from inline caching to registry caching and haven't seen the error return. I'd appreciate any help understanding the error message and correcting what's wrong with our caching setup.
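For reference, the registry-caching setup we switched to looks roughly like the sketch below. The <ECR> placeholders and the cache-branch-* tag names are illustrative, not our exact values; the point is that cache metadata is exported to a separate registry ref instead of being embedded inline in the image, and the registry exporter supports mode=max.

```shell
# Hedged sketch of the registry-cache variant of our build command.
# <ECR>, <FEATURE_BRANCH>, and <MAIN> are placeholders; the cache-branch-*
# tags are hypothetical names for the dedicated cache refs.
docker buildx build \
  --cache-from type=registry,ref=<ECR>:cache-branch-<FEATURE_BRANCH> \
  --cache-from type=registry,ref=<ECR>:cache-branch-<MAIN> \
  --cache-to type=registry,ref=<ECR>:cache-branch-<FEATURE_BRANCH>,mode=max \
  --tag <ECR>:branch-<FEATURE_BRANCH> \
  --push .
```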
Reproduction
I'm having a hard time providing reproduction instructions. Here are the flags that were used in the failing CI job:
--build-arg IMAGE_TAG=<GIT_SHORT_SHA> \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from type=registry,ref=<ECR>:branch-<FEATURE_BRANCH> \
--cache-from type=registry,ref=<ECR>:branch-<MAIN> \
--cache-to type=inline,mode=max \
--file Dockerfile \
--iidfile /home/runner/_work/_temp/docker-actions-toolkit-ohXfcK/build-iidfile-a8dbc73217.txt \
--label org.opencontainers.image.created=2025-09-23T23:52:49.106Z \
--label org.opencontainers.image.description=<DESC> \
--label org.opencontainers.image.licenses=MIT \
--label org.opencontainers.image.revision=<GIT_REV> \
--label org.opencontainers.image.source=<GIT_REPO> \
--label org.opencontainers.image.title=<GIT_REPO_NAME> \
--label org.opencontainers.image.url=<GIT_REPO> \
--label org.opencontainers.image.version=<GIT_BRANCH> \
--platform linux/amd64 \
--tag <ECR>:branch-<FEATURE_BRANCH> \
--tag <ECR>:<GIT_SHORT_SHA> \
--tag <ECR>:<GIT_LONG_SHA> \
--load \
--metadata-file /home/runner/_work/_temp/docker-actions-toolkit-ohXfcK/build-metadata-dac4a221c4.json \
--push .
The issue seemed to occur more frequently on multi-stage builds, where only the final stage's layers are stored in the inline cache. Users reported that after re-running the job several times, the error would sometimes go away and the job completed.
Version information
/usr/bin/docker version
Client: Docker Engine - Community
Version: 28.4.0
API version: 1.51
Go version: go1.24.7
Git commit: d8eb465
Built: Wed Sep 3 20:57:05 2025
OS/Arch: linux/amd64
Context: default
Server: Docker Engine - Community
Engine:
Version: 28.4.0
API version: 1.51 (minimum version 1.24)
Go version: go1.24.7
Git commit: 249d679
Built: Wed Sep 3 20:58:50 2025
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.7.28
GitCommit: b98a3aace656320842a23f4a392a33f46af97866
runc:
Version: 1.3.0
GitCommit: v1.3.0-0-g4ca628d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
/usr/bin/docker info
Client: Docker Engine - Community
Version: 28.4.0
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.26.1
Path: /usr/local/lib/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.39.4
Path: /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 3
Server Version: 28.4.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
CDI spec directories:
/etc/cdi
/var/run/cdi
Swarm: inactive
Runtimes: runc io.containerd.runc.v2
Default Runtime: runc
Init Binary: docker-init
containerd version: b98a3aace656320842a23f4a392a33f46af97866
runc version: v1.3.0-0-g4ca628d
init version: de40ad0
Security Options:
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.1.148-173.267.amzn2023.x86_64
Operating System: Alpine Linux v3.22 (containerized)
OSType: linux
Architecture: x86_64
CPUs: 48
Total Memory: 371.7GiB
Name: amd64-privileged-6jgxk-runner-62h75
ID: 02ced34e-dbee-4110-ab45-a455145b13ad
Docker Root Dir: /home/runner/_work/docker
Debug Mode: false
Experimental: false
Insecure Registries:
::1/128
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
/usr/bin/docker buildx version
github.com/docker/buildx v0.26.1 1a8287f22cf5a38339a4c1bf432b803c5f8b2aae