Enable docker cache build for images listed in docker-compose.yml #18179
Hey @leezu, thanks for submitting the PR.
CI supported jobs: [unix-gpu, sanity, website, centos-cpu, miscellaneous, windows-cpu, unix-cpu, clang, edge, centos-gpu, windows-gpu]
Force-pushed from 6e8f14f to 566a68a.
```groovy
timeout(time: total_timeout, unit: 'MINUTES') {
    utils.init_git()
    sh "python3 ./ci/docker_cache.py --docker-registry ${env.DOCKER_CACHE_REGISTRY}"
    sh "cd ci && docker-compose -f docker/docker-compose.yml build --parallel && docker-compose -f docker/docker-compose.yml push"
}
```
Can you pull that into docker_cache.py to leverage its parallelisation with respect to non-updated images, and also to provide one single interface?
The idea is to delete docker_cache.py later, so `docker-compose -f docker/docker-compose.yml build --parallel && docker-compose -f docker/docker-compose.yml push` would be the unified interface.
I'm not sure what parallelism you refer to. Is docker-compose build --parallel not sufficient?
I understand.
It's not sufficient in the sense that docker_cache.py and docker-compose are run in sequence. Alternatively, could you create a second parallel step so it runs on another node?
docker_cache.py will only be used temporarily. I think the current solution is fine given that the first step (`sh "python3 ./ci/docker_cache.py"`) will be deleted later.
> On Ubuntu you may install them via `sudo apt-get install docker.io docker-compose`
> and set them up via `sudo usermod $(whoami) -G docker -a`.
>
> Then, to run tests inside docker run the following command
Can you also add `pip3 install -r ci/requirements.txt`?
Thanks
Let's add it in a separate PR to unblock merging this fix. We can also consider getting rid of that dependency by relying on standard tools instead of our custom scripts.
Makes sense. But custom scripts sort of make the job easier for lazy people like me :p who would otherwise have to google the steps for setting up those standard tools. What do you reckon? They also take on the onus of ensuring testing is correct, rather than leaving users to find their own way out of whatever problems come up when setting up those standard tools on their own.
@mxnet-bot run ci [sanity]

Jenkins CI successfully triggered: [sanity]

@leezu is it supposed to fail on the docker cache refresh?
The run you point out fails due to flakiness of Docker Hub. After retriggering, that error is gone. However, there's another error to be fixed by
Enable docker cache build for images listed in docker-compose.yml
Ignore docker-compose.yml related files in ci/build_cache.py