
Conversation

@jimson-msft (Contributor) commented Sep 8, 2021

  • Docker build containers now support specifying a version. This version aligns with the next GitHub release version, in semantic versioning format (see the sketch below).
  • Also specify the vmImage name, since the host machine needs to be patched to use libcurl instead of cpprest.
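As a rough, minimal sketch of the idea (not the literal diff in this PR): a single version variable drives both the VM image name and the Docker build image tag. The variable name imageVersion, the registry exampleacr.azurecr.io, and the repository name below are placeholders.

```yaml
# Illustrative sketch only - names and values below are placeholders.
variables:
- name: imageVersion          # tracks the next GitHub release (semantic versioning)
  value: 0.8.0

pool:
  # Exact pool syntax depends on how the 1ES pool exposes its images.
  vmImage: do-adu-build-${{ variables.imageVersion }}

steps:
# Assumes the agent is already authenticated to the registry.
- script: docker pull exampleacr.azurecr.io/do-build-ubuntu18.04:$(imageVersion)
  displayName: Pull the versioned Docker build image
```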

@jimson-msft requested a review from a team as a code owner September 8, 2021 23:37

@jimson-msft (Contributor Author) commented Sep 8, 2021

@shishirb-MSFT This is just a draft for your review for now. I'm going to wait until the other PR for ImageFactory gets checked in so we can use do-adu-build-[VERSION] rather than the old machine. Because this branch still uses the old machine, its non-Docker builds will fail (no libcurl).

Note: there are failures in deb9 builds resulting from std::bad_cast():
https://deliveryoptimization.visualstudio.com/client/_build/results?buildId=3256&view=logs&j=f64b1c2f-1b39-5c58-9002-e3b993974b94&t=520448b5-31ed-5301-4234-9625ec8fd8ff
https://deliveryoptimization.visualstudio.com/client/_build/results?buildId=3249&view=logs&j=f64b1c2f-1b39-5c58-9002-e3b993974b94

Aside from that, I'm also thinking about replacing native builds entirely with Docker builds, and instead running tests and things like binary-size checks by mounting the build output back onto the host machine. This would spare us from maintaining ImageFactory artifacts. See an example run here:
https://deliveryoptimization.visualstudio.com/client/_build?definitionId=58&_a=summary
Based off of this branch:
https://github.com/microsoft/do-client/tree/user/jimson/ubuntu1804_docker
If this works, I'll move those changes over into this branch and delete our usage of ImageFactory artifacts altogether.
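Roughly, that approach might look like the sketch below. The image name, mount paths, and build/test commands are placeholders for illustration, not the actual pipeline steps.

```yaml
# Illustrative sketch only: build inside a container, write the output to a
# mounted host directory, then run tests and binary-size checks on the host.
steps:
- script: |
    docker run --rm \
      -v $(Build.SourcesDirectory):/code \
      -v $(Build.BinariesDirectory):/build \
      exampleacr.azurecr.io/do-build-ubuntu18.04:$(imageVersion) \
      bash -c "cd /build && cmake /code && make -j4"
  displayName: Build inside the versioned Docker container (placeholder commands)

- script: $(Build.BinariesDirectory)/tests/run-all-tests   # placeholder test runner path
  displayName: Run tests on the host against the mounted build output
```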


In reply to: 915645193

# Publishes the binaries + packages as artifacts.

variables:
- name: version
@shishirb-MSFT (Collaborator) Sep 9, 2021

> version

Do we need this variable? Wouldn't we just update the version in a branch when it needs to be changed? #Resolved

@jimson-msft (Contributor Author) Sep 9, 2021

Looks like the GitHub comment isn't showing up in CodeFlow:

It's needed in order to specify which Docker image version to use; I can't think of a way to extract the version number from GitHub directly.
With this approach, the latest development branch will always have this variable set to the next release version.

@shishirb-MSFT (Collaborator)

I meant, can't we hardcode it in the YAML file itself? When will we need to update the version on the fly just before scheduling a pipeline run?

@jimson-msft (Contributor Author) Sep 9, 2021

For the case where we need to patch a previous release. For example, if we needed to patch v0.7.0, which uses cpprest, we can now specify that version through the pipeline.
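For illustration only (not necessarily how this PR wires it up), a runtime parameter with a default is one way to let the image version be chosen at queue time when a run needs to target an older release; the names and values below are placeholders.

```yaml
# Illustrative sketch: the default follows the current release line, and a run
# against a servicing branch can be queued with e.g. imageVersion: 0.7.0.
parameters:
- name: imageVersion
  type: string
  default: '0.8.0'

variables:
- name: imageVersion
  value: ${{ parameters.imageVersion }}
```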

@shishirb-MSFT (Collaborator)

Okay, consider renaming it to something more specific. vmImageVersion?

@jimson-msft (Contributor Author)

Maybe dependencyVersion? It might be used both for the Docker image version and for specifying the vmImage within the pool.

@shishirb-MSFT (Collaborator)

buildEnvVersion?

@jimson-msft (Contributor Author)

I think imageVersion is fine, since it refers to both the VM and Docker images. I added a comment next to it as well.

I'm waiting on the ImageFactory PR to complete and on seeing runs go through before checking this in.

@jimson-msft (Contributor Author) Sep 10, 2021

The PR is completed, but there appears to be an issue with updating the image payload within CloudTest. As a result, the reported new do-adu-build-0.8.0 image does not have libcurl installed, despite ImageFactory logs showing libcurl being installed during image provisioning.

I'm going to complete this PR for now, but we won't see green builds until we get a response from the ImageFactory/CloudTest team.

@shishirb-MSFT (Collaborator)

How are the Docker images updated with the latest patches?


In reply to: 915645193

@shishirb-MSFT (Collaborator)

Also, I want local dev builds and testing to be as close as possible to pipeline builds and testing. So it would be nice if we could keep native builds.


In reply to: 915658658

@jimson-msft (Contributor Author) commented Sep 9, 2021

Docker images will still be manually patched if we need to make dependency changes.
There might be a way for us to set up pipelines that create new Docker images and push them to our ACR instance whenever a change to bootstrap.sh is detected.
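A minimal sketch of that idea, assuming an Azure Pipelines path trigger and the Docker@2 task; the branch name, trigger path, service connection, and repository below are placeholders.

```yaml
# Illustrative sketch only: rebuild and push the build image when bootstrap.sh changes.
trigger:
  branches:
    include: [ develop ]          # placeholder branch
  paths:
    include:
    - build/scripts/bootstrap.sh  # placeholder path to the bootstrap script

steps:
- task: Docker@2
  displayName: Build and push the build image to ACR
  inputs:
    command: buildAndPush
    containerRegistry: example-acr-service-connection   # placeholder service connection
    repository: do-build-ubuntu18.04                    # placeholder repository
    tags: |
      $(imageVersion)
```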

For native builds, there's no difference except that the build itself happens inside a Docker container.
Everything else, such as other build output (test executables for the SDK and client), is written to the host machine and still run from there. Packages are installed/removed on the host.

This solves the problem of having to update the artifact we own in ImageFactory every time we make a change to bootstrap.sh (there's no need to update that image to install libcurl, because the build itself runs in a Docker container, which we can update easily).


In reply to: 915661193

@shishirb-MSFT (Collaborator)

re: patches - I meant security/OS patches. The reason I'm asking is that the whole point of moving to 1ES was to let them take care of VM/image patching.


In reply to: 915665454

@jimson-msft (Contributor Author) commented Sep 9, 2021

I'm not sure the same concern applies to containers. The container image is secure because it's just that: an image. A VM running a hosted agent, by contrast, has various attack vectors for bad actors to take advantage of, and it's connected to a network.
By using an image from our ACR instance, we guarantee that when that image is spun up into a container, we know exactly what's on it. For a container to be targeted, the host machine would already have to be compromised.
The host machine that runs the containers is still managed by 1ES and patched automatically when needed.

Note: I don't think we can remove the usage of artifacts entirely; we will still need the artifact to provision some things, like the Docker engine. 1ES handles all security of the base image.


In reply to: 915674475

@shishirb-MSFT (Collaborator)

S360 has recently flagged security issues in our MCC instances as well, which use Docker containers. That's when we had to ask the MCC team to update the container from Ubuntu 18.04 to 20.04.
Maybe the tooling hasn't caught up to scanning Docker images on 1ES builds, but I'm sure we'll get there.


In reply to: 915679890

@shishirb-MSFT (Collaborator)

What about runtime dependencies when running tests on the host VM? We'll need libcurl installed at runtime as well.


In reply to: 916332369

@jimson-msft (Contributor Author) commented Sep 9, 2021

I was thinking about that too - the test run here passes:
https://deliveryoptimization.visualstudio.com/client/_build/results?buildId=3262&view=logs&j=e76d410b-906a-5a54-f5d2-89a839293c70&t=242881bb-7330-5492-b5ab-fe5d3f311053
The reason it's able to run is that libcurl is brought in by the "Install agent Debian package" task. The same goes for other runtime dependencies.
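For reference, a step like that might look roughly as follows; the package path and name are placeholders and the real task may differ, but installing the built .deb via apt pulls in its declared runtime dependencies (such as libcurl) on the host.

```yaml
# Illustrative sketch only: installing the freshly built agent package on the
# host VM lets apt resolve its runtime dependencies, e.g. libcurl.
steps:
- script: |
    sudo apt-get update
    sudo apt-get install -y ./deliveryoptimization-agent*.deb   # placeholder package name
  workingDirectory: $(Build.BinariesDirectory)
  displayName: Install agent Debian package
```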

For test binaries, we could link all libraries statically (though I'm not 100% sure whether package-installed libraries all have static counterparts installed alongside them). This is for the case where test code uses some dependency that isn't used by the component itself.

re: container security - MCC is an actively running container on an active device. With pipeline runs, we have an isolated environment where the image (guaranteed to be good, since it's hosted on our ACR - unless ACR gets compromised) is pulled and then used within the run.
After the run completes, the container is decommissioned. The key question, I think, is whether the host VM can get compromised during the run, which 1ES handles for us via updating/patching. I'm also not sure, but MCC also exposes itself to the host's network, correct? It listens on a specific port on the host, so outside traffic can get routed into the container hosting MCC - this might result in it getting flagged by S360, whereas our build container usage is not.

Still, it's a good point. I'll spend some time reaching out to the right people or reviewing the 1ES migration notes to see if there's anything more specific on this.


In reply to: 916335207

@shishirb-MSFT (Collaborator) left a comment

:shipit:

@jimson-msft (Contributor Author)

Discussed offline:
Since we are not shipping a container image or using the container in a production environment, we will not get flagged by S360.
For our use case, the guidance is to update these container images as a best practice - the same way we would for any VM.

For now, I will manually patch container images.
Opening this task to create pipelines that periodically patch all our container images:
https://microsoft.visualstudio.com/OS/_workitems/edit/35945036
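A minimal sketch of what such a periodic refresh might look like, assuming a cron schedule and a plain docker build step; the schedule, branch, registry, repository, and Dockerfile path are all placeholders.

```yaml
# Illustrative sketch only: rebuild the build image on a schedule so it picks
# up the latest base layers and package updates, then push it to ACR.
schedules:
- cron: "0 6 * * 1"            # placeholder: weekly, Mondays 06:00 UTC
  displayName: Weekly build-image refresh
  branches:
    include: [ develop ]       # placeholder branch
  always: true                 # run even when nothing has changed

steps:
- script: |
    docker build --pull --no-cache \
      -t exampleacr.azurecr.io/do-build-ubuntu18.04:$(imageVersion) \
      -f docker/ubuntu18.04/Dockerfile .
    docker push exampleacr.azurecr.io/do-build-ubuntu18.04:$(imageVersion)
  displayName: Rebuild and push the build image (placeholder names)
```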


In reply to: 916339889

@jimson-msft changed the title from "Enable versioning for build containers" to "Enable versioning for build images" on Sep 10, 2021
@jimson-msft merged commit 6d0fc3e into feature/replace-cpprest on Sep 10, 2021
@jimson-msft deleted the user/jimson/pipelines branch on September 10, 2021 23:03
shishirb-MSFT pushed commits that referenced this pull request on Sep 16, 2021