Bump cmake version for GPU build #11156
Conversation
The cmake version (3.10) in Ubuntu 18.04 does not cope well with the more advanced cmake use in libtorch surrounding the CUDA target. We switch to a self-built cmake 3.14 (already used by arm and i386 CI). The context for this is apache#10758.
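For reference, a minimal sketch of what installing a self-built CMake 3.14 in the Docker image could look like. The patch release, download URL, and absence of checksum verification here are assumptions for illustration; the real CI install script (the one already used by the arm and i386 images) should be treated as authoritative.

```bash
#!/bin/bash
# Sketch only: build CMake 3.14 from source instead of relying on
# Ubuntu 18.04's apt package (CMake 3.10). Version and URL below are
# illustrative assumptions, not values from the actual install script.
set -euo pipefail

CMAKE_VERSION=3.14.7   # assumed patch release of the 3.14 series

wget "https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz"
tar xzf "cmake-${CMAKE_VERSION}.tar.gz"
cd "cmake-${CMAKE_VERSION}"
./bootstrap --parallel="$(nproc)"
make -j"$(nproc)"
make install
```

Building from source keeps the base image on Ubuntu 18.04 while providing a CMake new enough for the CUDA-related target handling that libtorch's CMake config relies on.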
@Mousius Seems the TF-aarch64 v2.6.2 we try to install at https://github.com/apache/tvm/blob/main/docker/install/ubuntu_install_tensorflow_aarch64.sh#L29 is no longer available? See https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-11156/1/pipeline/. Maybe we should switch to TF 2.7?
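For context, a bump like the one suggested above would amount to changing the pinned version in that install script. A minimal sketch, assuming the package name, version, and pip invocation shown here; the real flags and any Linaro cache URL live in ubuntu_install_tensorflow_aarch64.sh.

```bash
#!/bin/bash
# Sketch only: pinned aarch64 TensorFlow install. Package name, version,
# and flags are assumptions for illustration; see
# docker/install/ubuntu_install_tensorflow_aarch64.sh for the real command.
set -euo pipefail

TF_VERSION=2.7.0   # hypothetical bump from the 2.6.2 that is no longer available

pip3 install "tensorflow-aarch64==${TF_VERSION}"
```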
Looks like https://snapshots.linaro.org/ldcg/python-cache/tensorflow-aarch64/ is down for some reason; I've reached out to find out why. The reason for the pinned version is in tvm/docker/install/ubuntu_install_tensorflow.sh, lines 23 to 26 (at 9fd279b). Upgrading to ...
@driazati For a PR like this where only ...
@t-vi / @masahi it looks like https://snapshots.linaro.org/ldcg/python-cache/tensorflow-aarch64/ is back; I've restarted your build in Jenkins to see if we can get this moving 😸
Thank you, @Mousius!
The cmake version (3.10) in Ubuntu 18.04 does not cope well with the
more advanced cmake use in libtorch surrounding the CUDA target.
We switch to a self-built cmake 3.14 (already used by arm and i386 CI).
The context for this is #10758.
Thank you @masahi and @driazati for your helpful discussion; errors are my own.