Enable versioning for build images #68
Conversation
@shishirb-MSFT Just a draft for your review for now; I'm going to wait until the other PR for ImageFactory gets checked in so we can use do-adu-build-[VERSION] rather than the old machine. As a result of still using the old machine, the non-docker builds on this branch will fail (no libcurl). Note: there are failures in deb9 builds resulting from std::bad_cast():

Aside from that, I'm also thinking about replacing native builds entirely with docker builds, and instead running tests and things like binary size checking by mounting the build output back to the host machine. This should free us from maintaining ImageFactory artifacts. See example run here:

In reply to: 915645193
```yaml
# Publishes the binaries + packages as artifacts.
variables:
- name: version
```
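A minimal sketch of how the version variable under discussion might look in the pipeline YAML (the default value and comment are illustrative assumptions, not taken from the actual pipeline file):

```yaml
variables:
- name: version        # build-image version; can be overridden at queue time
  value: '0.8.0'       # hypothetical default: the next release version
```

Declaring a default in the YAML while leaving the variable settable at queue time is what allows the "patch an old release" scenario discussed below.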
Looks like the GitHub comment isn't showing up in CodeFlow:
It's needed in order to specify which docker image version to use. I can't think of a way to extract the version number from GitHub directly.
With this approach, the latest development branch will always have this version set to the next release version
I meant, can't we hardcode it in the yaml file itself? When would we need to update the version on the fly just before scheduling a pipeline run?
For the case where we need to patch a previous release. For example, if we needed to patch v0.7.0, which uses cpprest, we can now specify that version through the pipeline.
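For illustration, a queue-time override could be done from the Azure DevOps CLI roughly like this (the pipeline name and branch are hypothetical; only the variable-override mechanism is the point):

```shell
# Sketch: queue a run of an older release branch, pointing it at the
# matching build-image version via a queue-time variable override.
az pipelines run --name "adu-build" \
    --branch "release/0.7.0" \
    --variables version=0.7.0
```

The same override is also available through the "Run pipeline" dialog in the web UI, without any CLI involvement.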
Okay, consider renaming it to something more specific. vmImageVersion?
Maybe dependencyVersion? It might be used both for the docker image version and for specifying the vmImage within the pool.
buildEnvVersion?
I think imageVersion is fine, since it refers to both the VM and docker images. Added a comment next to it as well.
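A sketch of the agreed-on shape, where one imageVersion variable drives both the VM image and the docker build image (pool name, ACR path, and demand syntax are assumptions for illustration, not taken from the PR):

```yaml
variables:
- name: imageVersion   # version of both the VM image and the docker build image
  value: '0.8.0'

pool:
  name: cloudtest-pool                                        # hypothetical 1ES pool
  demands: ImageOverride -equals do-adu-build-$(imageVersion) # hypothetical image name

resources:
  containers:
  - container: build
    image: myacr.azurecr.io/adu-build:$(imageVersion)         # hypothetical ACR path
```

Keeping a single variable avoids the VM image and container image silently drifting to different dependency sets.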
Waiting on the PR completion from ImageFactory and seeing runs go through before checking this in.
The PR is completed. There appears to be an issue with updating the image payload within CloudTest; as a result, the reported new do-adu-build-0.8.0 image does not have libcurl installed, despite ImageFactory logs showing libcurl being installed during image provisioning.
I'm going to complete this PR for now, but we won't see green builds until we get a response from the ImageFactory/CloudTest team.
How are the docker images updated with the latest patches? In reply to: 915645193
Also, I want dev local builds and testing to match pipeline builds and testing as much as possible, so it would be nice if we can keep native builds. In reply to: 915658658
Docker images will still be manually patched if we need to make dependency changes. For native builds there's no difference, except that the build itself happens inside a docker container. This solves the problem of having to update the artifact we own in ImageFactory every time we make a change to bootstrap.sh (no need to update that image to install libcurl, because the build itself runs in a docker container, which we can update easily). In reply to: 915661193
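A rough sketch of the containerized-build idea described above (the image name, tag, and build-script path are assumptions for illustration). The source tree is bind-mounted so the build output lands back on the host, where tests and binary-size checks can then run:

```shell
# Run the build inside the versioned build container; output appears on the
# host via the bind mount, so no ImageFactory artifact update is needed when
# build-time dependencies change.
docker run --rm \
    -v "$PWD:/src" \
    -w /src \
    myacr.azurecr.io/adu-build:0.8.0 \
    ./scripts/build.sh
```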
Re: patches, I meant security/OS patches. The reason I'm asking is that the whole point of moving to 1ES was to let them take care of VM/image patching. In reply to: 915665454
I'm not sure the same concern applies to containers. The container image is secure because it's just that: an image. Whereas a VM running a hosted agent has various attack vectors for bad actors to take advantage of, and it's connected to a network. Note: I don't think we can remove the usage of artifacts entirely; we will still need the artifact to provision some things like the docker engine. 1ES handles all security of the base image. In reply to: 915674475
S360 has recently flagged security issues in our MCC instances as well, which also use docker containers. That was when we had to ask the MCC team to update the container from Ubuntu 18.04 to 20.04. In reply to: 915679890
What about runtime dependencies when running tests on the host VM? We'll need libcurl installed at runtime as well. In reply to: 916332369
I was thinking about that too; the test run here passes:

For test binaries, we could link all libraries statically (though I'm not 100% sure that package-installed libraries all have static counterparts installed alongside). This is for the case where we use some dependency in test code that isn't used by the component itself.

Re: container security, MCC is an actively running container on an active device. With the pipeline runs we have an isolated environment where the image (guaranteed to be good, since it's hosted on our ACR, unless ACR gets compromised) is pulled and then used within the run. Still, it's a good point. I'll spend some time reaching out to the right people or reviewing the 1ES migration notes to see if there's anything more specific on this.

In reply to: 916335207
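The static-linking uncertainty above could be checked quickly; as an unverified sketch (package name is Debian/Ubuntu-specific, and the link line is illustrative, since static libcurl typically drags in extra dependencies):

```shell
# Does the dev package ship a static archive alongside the shared library?
dpkg -L libcurl4-openssl-dev | grep '\.a$'

# If so, a test binary could link libcurl statically, roughly like this
# (OpenSSL/zlib deps shown dynamic; exact set varies with the curl build):
g++ -o adu_tests tests/main.cpp \
    -Wl,-Bstatic -lcurl -Wl,-Bdynamic -lssl -lcrypto -lz -lpthread
```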
shishirb-MSFT left a comment:
Discussed offline: for now, I will manually patch container images. In reply to: 916339889