
Conversation

@dnephin (Contributor) commented Nov 21, 2017

No description provided.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
@codecov-io commented Nov 21, 2017

Codecov Report

Merging #709 into master will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##           master     #709   +/-   ##
=======================================
  Coverage   51.28%   51.28%           
=======================================
  Files         216      216           
  Lines       17743    17743           
=======================================
  Hits         9099     9099           
  Misses       8175     8175           
  Partials      469      469

@thaJeztah (Member) left a comment:

LGTM

ping @tonistiigi


result.Assert(t, icmd.Expected{Err: icmd.None})
assertBuildOutput(t, result.Stdout(), map[int]lineCompare{
	0: prefix("Sending build context to Docker daemon"),

Member:

Maybe add a comment here that this is for catching regressions, not a stability guarantee (it even seems to check only specific lines). Also, I'm not sure whether these tests are supposed to work with all supported versions of the daemon. If they are, then we need to be more flexible here.

Contributor Author:

If we need to support multiple daemon versions, we can provide a different map[int]lineCompare for each supported version (sketched below), but I don't think we should be less strict about the assertion. I don't think we run these against multiple versions just yet.

The reason it doesn't check every line is that the other lines are just ---> RANDOMID. We could add checks for those too, but I'm not sure we should consider those important parts of the build output.
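
A minimal sketch of the per-version idea, assuming lineCompare is a func(string) error matcher and prefix builds one (the real helpers are defined in this PR's test code); expectedBuildOutput, the version string, and the extra line index are hypothetical:

```go
package e2e

import (
	"fmt"
	"strings"
)

// lineCompare and prefix stand in for the helpers used by assertBuildOutput;
// their exact signatures here are assumptions.
type lineCompare func(string) error

func prefix(expected string) lineCompare {
	return func(line string) error {
		if !strings.HasPrefix(line, expected) {
			return fmt.Errorf("expected %q to start with %q", line, expected)
		}
		return nil
	}
}

// expectedBuildOutput is hypothetical: it picks a strict assertion map for the
// daemon version the suite runs against, instead of loosening the assertions.
func expectedBuildOutput(daemonVersion string, lastLine int) map[int]lineCompare {
	expected := map[int]lineCompare{
		0:        prefix("Sending build context to Docker daemon"),
		lastLine: prefix("Successfully built"),
	}
	if strings.HasPrefix(daemonVersion, "17.06.") {
		// Per-version tweaks stay strict; they just target what that daemon
		// actually prints (the line index and content here are assumptions).
		expected[1] = prefix("Step 1/")
	}
	return expected
}
```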

Why do you think it shouldn't be for stability guarantees?

Member:

Why do you think it shouldn't be for stability guarantees?

For example, this changed in v17.10, and moby/moby#35549 is open at the moment. It has probably changed many other times that I don't remember.

Member:

A better check here would be to verify that the image exists and has the right properties, or that the image built with --iidfile is actually available.

Contributor Author:

I see what you mean. I think it's ok if the output changes when we have a good reason, but it's bad if it changes accidentally, so we should still be as strict as possible about the output. We could add the ---> RANDOMID checks in a follow-up.
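
One possible shape for that follow-up, assuming the intermediate lines look like ---> followed by a 12-character short ID; the regexp and the matchesStepID name are illustrative, not code from this PR:

```go
package e2e

import (
	"fmt"
	"regexp"
)

// stepIDLine assumes the classic builder prints intermediate lines such as
// " ---> 4a1b2c3d4e5f" (a 12-character short ID, possibly indented).
var stepIDLine = regexp.MustCompile(`^\s*---> [0-9a-f]{12}$`)

// matchesStepID has the same func(string) error shape as the other matchers,
// so it could slot into the map[int]lineCompare passed to assertBuildOutput.
func matchesStepID(line string) error {
	if !stepIDLine.MatchString(line) {
		return fmt.Errorf("expected %q to be an intermediate image ID line", line)
	}
	return nil
}
```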

A better check here would be to verify that the image exists and has the right properties, or that the image built with --iidfile is actually available.

No, that is the wrong thing to check. It is not the responsibility of the docker/cli e2e test suite to test the daemon. It should only test the interaction between the two components and the behaviour of the CLI (most of the CLI behaviour should already be covered by unit tests). Checking whether the image exists is the responsibility of the engine API tests. The engine reported that it created an image (the last line in the output), which is all the CLI should care about.

Contributor Author:

From a scripting perspective, the exit code may be enough, but interactive users should expect some consistency in the output. This test verifies the exit code as well (result.Assert()).

It is easy to have a bug in cli that causes the build+inspect flow to get broken while the API works fine

Do you have an example of how that could happen? If both build and inspect are tested separately to behave correctly with a given input, how could the CLI break when used together?

Btw, this is why I pointed to use iidfile for this, as it is another cli feature.

Testing that the iidfile contains an ID would be fine, but checking that the image is available would be incorrect as part of the build test.
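
A rough sketch of that kind of CLI-side check: read the file written via --iidfile and assert it looks like an image ID, without asking the daemon whether the image exists. The checkIIDFile name and the exact ID format are assumptions:

```go
package e2e

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// imageIDPattern assumes the iidfile contains a full content-addressable ID,
// i.e. "sha256:" followed by 64 hex characters.
var imageIDPattern = regexp.MustCompile(`^sha256:[0-9a-f]{64}$`)

// checkIIDFile verifies only what the CLI wrote to disk; it deliberately does
// not inspect the image on the daemon.
func checkIIDFile(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	id := strings.TrimSpace(string(data))
	if !imageIDPattern.MatchString(id) {
		return fmt.Errorf("iidfile %s does not contain an image ID: %q", path, id)
	}
	return nil
}
```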

Contributor Author:

I don't think this needs to be removed. Even in my first post, I only asked for a comment.

Yes, understood. I just wanted to make sure I understood your comment correctly, so I asked a few questions to clarify.

Member:

Do you have an example of how that could happen?

For example, if the CLI decides to implement or guarantee some of the --rm / --force-rm functionality on the client side.

Testing that the iidfile contains an ID would be fine, but checking that the image is available would be incorrect as part of the build test.

In the API and old integration-cli tests we are not limiting ourselves to a requirement that a single test may only do requests against a single endpoint. It is important that the commands work correctly together, not only that they work as an isolated subset. Having this limitation just means that some important cases are not covered. In many cases it doesn't make sense anyway; for example, to effectively test the docker attach command you need to use it together with create/start/wait/pause/rm/restart/stop/kill etc.

Contributor Author:

we are not limiting ourselves to a requirement that a single test may only do requests against a single endpoint.

I should clarify: I'm not saying it's incorrect because it hits a second endpoint; I'm saying it's incorrect because it tests behaviour that is outside the scope of this repository. The scope of this repo is the CLI behaviour: read user input, perform the operation (often one or more API requests), and output to the user. That is what should be tested. The engine already reported that it created the image (by sending the message "Successfully built ..."); doing an inspect after that would be testing the internal state of the engine, which is why it's incorrect.

There are plenty of tests in the e2e suite which do make use of multiple endpoints (and commands) to setup state, but there is only ever a single command being tested at once.

for example, to effectively test the docker attach command you need to use it together with create/start/wait/pause/rm/restart/stop/kill etc.

To effectively test docker attach you need the engine to be in a specific state that accurately reflects the normal runtime state. One safe way of setting up that state is to use create/start/wait; that's true. But it needs to be clear that those other commands are only being used as part of setup. They are not the commands being tested.
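
A sketch of that shape, where run and rm are only setup and teardown and docker attach is the one command under test; the test name, image, container name, and the icmd import path are assumptions:

```go
package e2e

import (
	"testing"

	"gotest.tools/icmd" // import path assumed; the suite's vendored path may differ
)

func TestAttachShowsContainerOutput(t *testing.T) {
	// Setup only: start a container that prints after a short delay, so the
	// attach below is connected before the output appears.
	icmd.RunCommand("docker", "run", "-d", "--name", "attach-target",
		"busybox", "sh", "-c", "sleep 2; echo hello").Assert(t, icmd.Success)
	defer icmd.RunCommand("docker", "rm", "-f", "attach-target")

	// The command under test: attach should stream the container's stdout
	// and exit with the container's exit code.
	result := icmd.RunCommand("docker", "attach", "attach-target")
	result.Assert(t, icmd.Expected{Out: "hello"})
}
```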

Contributor Author:

In the engine API integration suite I think it would be appropriate to inspect after a build, because in that case the engine state is relevant to the thing being tested. So that functionality should already be covered.

@dnephin dnephin merged commit 6476524 into docker:master Nov 23, 2017
@GordonTheTurtle GordonTheTurtle added this to the 17.12.0 milestone Nov 23, 2017
@dnephin dnephin deleted the e2e-docker-build branch November 23, 2017 18:54
@dnephin (Contributor Author) commented Nov 23, 2017

Going to merge since this has two LGTMs and the discussion is more about general testing philosophy, not directly related to this change.
