This repository was archived by the owner on Nov 17, 2023. It is now read-only.
Merged
1 change: 1 addition & 0 deletions ci/Jenkinsfile_docker_cache
@@ -38,6 +38,7 @@ core_logic: {
timeout(time: total_timeout, unit: 'MINUTES') {
utils.init_git()
sh "python3 ./ci/docker_cache.py --docker-registry ${env.DOCKER_CACHE_REGISTRY}"
+sh "cd ci && docker-compose -f docker/docker-compose.yml build --parallel && docker-compose -f docker/docker-compose.yml push"
Contributor:

Can you pull that into docker_cache.py to leverage parallelisation wrt non-updated images and also provide one single interface?

Contributor Author:

The idea is to delete docker_cache.py later, so `docker-compose -f docker/docker-compose.yml build --parallel && docker-compose -f docker/docker-compose.yml push` would be the unified interface.
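For illustration, a minimal Python sketch of what that unified interface could look like once docker_cache.py is removed. The compose file path is the one used in this PR; the helper name is hypothetical, not part of the change:

```python
# Hypothetical helper sketching the proposed unified interface: build all
# images in parallel with docker-compose, then push them to the registry.
def compose_commands(compose_file='docker/docker-compose.yml'):
    """Return the two commands run by the new CI step, in order."""
    build = ['docker-compose', '-f', compose_file, 'build', '--parallel']
    push = ['docker-compose', '-f', compose_file, 'push']
    return [build, push]

# Running them requires docker-compose to be installed, e.g.:
# import subprocess
# for cmd in compose_commands():
#     subprocess.run(cmd, check=True)
```

The subprocess invocation is left commented out because it needs a working docker-compose install and registry credentials.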

Contributor Author:

I'm not sure what parallelism you refer to. Is `docker-compose build --parallel` not sufficient?

Contributor (@marcoabreu, Apr 27, 2020):

I understand.

It's not sufficient in the sense that docker_cache.py and docker-compose are run in sequence. Alternatively, could you create a second parallel step so it runs on another node?

Contributor Author:

docker_cache.py will only be used temporarily. I think the current solution is fine, given that the first step (`sh "python3 ./ci/docker_cache.py"`) will be deleted later.

}
}
}
10 changes: 7 additions & 3 deletions ci/build.py
@@ -47,17 +47,21 @@
         'centos7_gpu_cu101', 'centos7_gpu_cu102', 'ubuntu_cpu',
         'ubuntu_build_cuda', 'ubuntu_gpu_cu101', 'publish.test.centos7_cpu',
         'publish.test.centos7_gpu')
+# Files for docker compose
+DOCKER_COMPOSE_FILES = set(('docker/build.centos7', 'docker/build.ubuntu', 'docker/publish.test.centos7'))


 def get_dockerfiles_path():
     return "docker"


-def get_platforms(path: str = get_dockerfiles_path()) -> List[str]:
+def get_platforms(path: str = get_dockerfiles_path(), legacy_only=False) -> List[str]:
     """Get a list of architectures given our dockerfiles"""
     dockerfiles = glob.glob(os.path.join(path, "Dockerfile.*"))
-    dockerfiles = list(filter(lambda x: x[-1] != '~', dockerfiles))
-    files = list(map(lambda x: re.sub(r"Dockerfile.(.*)", r"\1", x), dockerfiles))
+    dockerfiles = set(filter(lambda x: x[-1] != '~', dockerfiles))
+    files = set(map(lambda x: re.sub(r"Dockerfile.(.*)", r"\1", x), dockerfiles))
+    if legacy_only:
+        files = files - DOCKER_COMPOSE_FILES
     platforms = list(map(lambda x: os.path.split(x)[1], sorted(files)))
     return platforms
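To make the `legacy_only` behavior concrete, here is a small standalone sketch (a hypothetical mirror of the diff above, not the actual ci/build.py code) showing how the set difference drops the docker-compose-managed platforms:

```python
import os

# Compose-managed platform files, as defined in this PR.
DOCKER_COMPOSE_FILES = {'docker/build.centos7', 'docker/build.ubuntu',
                        'docker/publish.test.centos7'}

def filter_platforms(files, legacy_only=False):
    """Sketch of the new logic: optionally drop compose-managed entries,
    then return the sorted platform names (basename of each path)."""
    files = set(files)
    if legacy_only:
        files = files - DOCKER_COMPOSE_FILES
    return [os.path.split(x)[1] for x in sorted(files)]
```

With `legacy_only=True`, only the images not yet migrated to docker-compose remain, which is what docker_cache.py now builds.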

2 changes: 1 addition & 1 deletion ci/docker_cache.py
@@ -190,7 +190,7 @@ def script_name() -> str:

     args = parser.parse_args()

-    platforms = build_util.get_platforms()
+    platforms = build_util.get_platforms(legacy_only=True)

secret_name = os.environ['DOCKERHUB_SECRET_NAME']
endpoint_url = os.environ['DOCKERHUB_SECRET_ENDPOINT_URL']
18 changes: 13 additions & 5 deletions tests/README.md
@@ -62,13 +62,21 @@ Ninja is a build tool (like make) that prioritizes building speed. If you will b

## Running Python Tests Within Docker

-1. To run tests inside docker run the following comamdn
-```
-ci/build.py --platform {PLATFORM} /work/runtime_functions.sh {RUNTIME_FUNCTION}
-```
+To run tests inside docker, you first need to install `docker` and `docker-compose` on your machine.
+
+On Ubuntu you may install them via `sudo apt-get install docker.io docker-compose`
+and set them up via `sudo usermod $(whoami) -G docker -a`.
+
+Then, to run tests inside docker run the following command
Contributor:

Can you also add `pip3 install -r ci/requirements.txt`?
Thanks

Contributor Author:

Let's add it in a separate PR to unblock merging this fix. We can also consider getting rid of that dependency by relying on standard tools instead of our custom scripts.

Contributor:

Makes sense. But custom scripts sort of make the job easier for lazy people like me :p, since they save me from googling the setup steps for those standard tools. What do you reckon? They also take on the onus of ensuring testing is correct, rather than leaving it to users to find their way out of whatever problems come up when setting up those standard tools on their own.


+```
+ci/build.py --platform {PLATFORM} /work/runtime_functions.sh {RUNTIME_FUNCTION}
+```

An example for running python tests would be

```
-ci/build.py --platform build_ubuntu_cpu_mkldnn /work/runtime_functions.sh unittest_ubuntu_python3_cpu PYTHONPATH=./python/ pytest tests/python/unittest
+ci/build.py --platform build_ubuntu_cpu_mkldnn /work/runtime_functions.sh unittest_ubuntu_python3_cpu
```

