Multibranch pipeline job example #2
Conversation
|
@liuguo09 please take a look. |
|
I have updated the commits to include @liuguo09's CI script, so the pull request could actually be used/merged. |
|
@maht - What is the URL of the job's results? |
|
Here is a test build: https://builds.apache.org/view/Incubator%20Projects/job/NuttX-Nightly-Build/18/console
|
@davids5
I have not tested it on the builds.apache.org Jenkins, since I have no rights to create the pipeline job.
I tested it on an external Jenkins server I created, which points to my forks of the repos (github.com/maht/*) instead of the Apache ones:
http://nuttx.ci.midocloud.net:8080/job/nuttx/job/master/2/ (classic view)
http://nuttx.ci.midocloud.net:8080/blue/organizations/jenkins/nuttx/detail/master/2/pipeline (new "Blue Ocean" UI)
The last one failed because 'make' is not installed on the Jenkins slave (I need to install it when I have time).
The definition of the job is the same as described in https://github.com/apache/incubator-nuttx-testing/pull/2/files#diff-f3b70337f738c1b93a5fe2f8c215770a, so it can be used to create the job on builds.apache.org by someone with enough rights.
Feedback is welcome :)
…On Tue, Feb 4, 2020 at 1:54 PM Xiang Xiao ***@***.***> wrote:
Here is a test build:
https://builds.apache.org/view/Incubator%20Projects/job/NuttX-Nightly-Build/18/console
It fails due to this known issue: apache/nuttx#102
We need to fix it to get stable output.
|
|
One more comment: the advantage of the pipeline job with respect to a normal one is that each pull request creates its own build history, so results from different pull requests are not mixed together, giving better feedback about a pull request's "weather" (stability). Maybe this is not important for nightly builds of the master branch. Each pull request will appear as a new entry with its own job history in the following links: |
|
I don't think that huge nightly builds triggered by cron and tailored sanity checks triggered by PRs are mutually exclusive. I would like to have both. |
|
I think we should hook it up with GitHub and run it automatically for each incoming PR. |
|
Totally agree, and that was and is the original intention. This is just a skeleton to build upon: I think we should merge this pull request, and then, once it is being triggered correctly by pull requests according to the agreed workflow, we can improve it by changing this Groovy shared library (we will rarely, if ever, need to change the Jenkinsfile in the main repo). For example, I just reused @liuguo09's CI script. We can iterate on this aside from the core OS repo. For example, it should be easy to modify the Groovy script to make it look like the parallelized pipeline @davids5 has shown. To do that, another Jenkinsfile would be added to the testing repo itself, and another multibranch pipeline created, so that the pipeline itself can be tested (from pull request branches to master). But not having these jobs defined in builds.apache.org yet makes it difficult to test/iterate for now. I hope this clarifies my proposal. |
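As a rough illustration of that split (the library name 'nuttx-testing' and the step name 'nuttxPipeline' below are placeholders, not the actual identifiers in this pull request), the Jenkinsfile kept in the main repo could stay as small as this:

// Hypothetical Jenkinsfile in the main nuttx repo. All the real logic
// lives in the shared library provided by the testing repo, so this file
// should rarely, if ever, need to change.
@Library('nuttx-testing') _

nuttxPipeline()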
|
I have no rights on builds.apache.org. |
That's right. As @maht said, cibuild.sh and the testlist files were designed to support both the nightly build and the PR check build. I also agree that it would be better to use the Jenkins pipeline approach for the advantages @maht and @davids5 mentioned. |
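For illustration only, the two modes could map onto stages of one multibranch pipeline roughly as sketched below; '-b full' is the option this pull request's Jenkinsfile already uses, while '-b check' is a hypothetical option whose real name and contents would depend on how cibuild.sh and the testlist files evolve.

// Sketch only: one multibranch pipeline serving both build modes.
pipeline {
    agent any
    stages {
        stage('Full build (nightly)') {
            when { branch 'master' }      // e.g. the cron-triggered nightly on master
            steps {
                sh './testing/cibuild.sh -b full'
            }
        }
        stage('Check build (pull request)') {
            when { changeRequest() }      // runs only for PR branches
            steps {
                sh './testing/cibuild.sh -b check'   // hypothetical reduced option
            }
        }
    }
}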
|
Yes, regarding time concerns |
@maht I am not familiar with Jenkins, so you can work with @liuguo09 to finish the pipeline, but I can help with any admin work if required.
Why? If the full build does not pass, the build is broken and the PR should not come in.
|
Hi,
I think the "nightly build" mindset is not the way this should be looked at. A 24-hour cycle time is not going to move this project along. The builds need to be parallelized, with results like 100 config builds in under 10 minutes.
Just be aware that the build servers are a shared resource across all projects and we are expected not to make unreasonable demands on them. I'm not sure whether the above would be unreasonable or not.
Thanks,
Justin
|
Yet another great argument for GitHub Actions. |
|
I pushed for this a couple of times, but it seemed full steam ahead for Jenkins. My plan was to wait for some of the CI scripts to stabilize and make my case again, especially with the free access to Windows and macOS builds. In these days of modern CI, I don't know many other OSS projects that are moving onto Jenkins; most are there because they already have it and it works. A lot have moved to GitLab, CircleCI, Travis, etc. All this said, I don't have the time to do the work right now, so I should really just stay out of it if others want to take it on. |
I have just tested GitHub Actions in my fork with https://github.com/liuguo09/incubator-nuttx-testing/blob/github-action-test-1/.github/workflows/ci.yml, and the result is at https://github.com/liuguo09/incubator-nuttx-testing/runs/427035372?check_suite_focus=true. It really does a good job. I think it doesn't matter much whether we use Apache Jenkins or GitHub Actions for CI. The most important thing is how to define the PR check build: choose some typical configs to build, or do a full build. If the check build should behave like the full build, then how do we reduce the whole build time? In my previous test builds with Apache Jenkins (Apache build server) and GitHub Actions (using a GitHub-hosted 2-core runner), it takes 3-4 hours to finish the whole build process. Using a more powerful Apache build server or a powerful self-hosted GitHub Actions runner may help, or we could switch to out-of-tree builds to speed up and parallelize the config builds. Wish more discussion coming : )
|
You can run between 20 and 60 concurrent jobs depending on what account type the Apache org is under. I would expect each build to be its own job, and you can set it to fail fast. I have also set these up before to tailor the jobs based on labels. You could have a minimal set that always runs and then add a CI/ARM label and have it trigger every ARM build, but this may be over-engineering at this point. |
Yes. This will help. You do realize this is all in the PX4 CI and we are reinventing the wheel one spoke at a time here. |
Well, in theory, that should be true. In practice, usually not, as far as I have seen in my career. Simply put, running all the possible tests is impossible, because the number of possible tests can grow exponentially. Tomorrow I add an on/off flag, and I have magically doubled the number of possible tests: all the previous tests running with "on", and all the previous tests running with "off". So, at some point, no matter how big your capacity is, you have to think about the minimum you want to check with the reasonable amount of resources you have.

I am open to debating what to include in "check" vs "full", or how to improve the way they are executed so that more can be included or they consume fewer resources or less execution time. But saying that quality will be bad because we are not using the "full" set is meaningless. I could create an even "fuller" set of tests with performance, static code analysis, coverage, chaos testing, etc. Would that mean we need to use "fuller" instead of "full"? Or should we avoid creating too many tests for fear of not including them?

We need to set a line, or several, about what to test or not at each level of the test hierarchy (per pull request, nightly, releases, etc.) so everything scales and stays smooth, and we should adjust these lines over time. So let's focus on defining which tests are the most important/minimal for the pull request trigger.
I have never used GitHub Actions, CircleCI, or any other CI service as much as Jenkins, but all of them are welcome if they help check pull request quality in a reliable way (i.e., without false alarms). I don't see why Jenkins has to be the only or preferred option (as long as the Apache SF allows it, which seems to be the case). In the proposal [1] I wrote, I talk about Jenkins, but I also say: "The pipeline term is used here in a loose way. It means whatever arrange of tools and scripts to execute the CI. Specifically it does not restrict to the Jenkins CI pipelines.", so implementing the pipeline with another service is perfectly fine.
@davids5 I did not know the PX4 CI, but it looks very mature and curated. It would be wonderful to reuse as much of its know-how as possible in our CI. I will take a look, but if you already know the internals and "tricks" used to improve the efficiency of that CI, you are more than welcome to share them. That said, this debate should happen on the mailing list or the wiki and not in the comments of a pull request, I think... So, going back to the original matter of reviewing this pull request: is it worth merging even though it is nowhere near an optimal CI, or will the inertia/bad practices it introduces create a monster in the long term, so that it is preferable to work toward something better? (I don't want to be the one creating that monster and be haunted forever 👻). [1] https://cwiki.apache.org/confluence/display/NUTTX/Continuous+integration+--+Miguel+Herranz |
|
PX4 software is welcome provided that it is properly donated via a software grant, has an Apache 2 license, and goes through the same PR process and rigorous review as any other donation. |
Yes and no. I am speaking about NuttX and building the board configs that are in the repo when a change affects an arch and the OS. Dead CONFIG_xxxxx are just like dead code: not tested and not of value. The build is broken when a board/config does not build. PRs should not break the build; there is no wiggle room on this. If we need more coverage, expand the configs for the board. Building an arch's boards (e.g. STM32F7, STM32H7, imx, etc.) is how this is tested. Yes, the test vectors blow up quickly, but the baseline board configs should catch the untested and myopic PRs.
This was suggested a long time ago; I was under the impression @liuguo09 was going to use it. |
|
As an Apache project, we do have to follow certain rules before bringing in any 3rd-party software. I think we need a software grant now. I understand that some existing 3rd-party code can be grandfathered in. Other 3rd-party code and all new 3rd-party code must have a formal, signed grant.
@maht I'll pull it to verify the pipeline approach in my local Jenkins first, and give feedback later.
stages {
    stage('Checkout') {
        steps {
            deleteDir() // clean up our workspace
@maht, I used the scripts to set up the Jenkins pipeline job locally and it works well. But I notice from the build log that the nuttx repo is checked out first, before deleteDir. Is this right?
No, sorry, this is not OK. I need to update the script: it should be checked out in the nuttx/ path. Let me fix it.
}
stage('Builds') {
    steps {
        sh './testing/cibuild.sh -b full'
Since I have updated cibuild.sh to decouple the -b option from setting up the repos, should we check out the nuttx repo like the apps and testing repos above?
Yes, I guess we should check it out using the same code, so that we check out the specific commit and not the top of the master branch.
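Something roughly like the sketch below, for example; this is not the exact code from the pull request, and the sibling repository URLs are illustrative (a fork could be substituted).

// Sketch of a checkout stage: the triggering commit goes into nuttx/ and
// the sibling repos are cloned next to it.
stage('Checkout') {
    steps {
        deleteDir()              // start from a clean workspace
        dir('nuttx') {
            checkout scm         // the specific commit that triggered the build, not master HEAD
        }
        dir('apps') {
            git url: 'https://github.com/apache/incubator-nuttx-apps.git'
        }
        dir('testing') {
            git url: 'https://github.com/apache/incubator-nuttx-testing.git'
        }
    }
}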
|
I pushed new versions of the commits. Now the main repository is checked out in the nuttx/ directory. |
|
That's good. Another thing I wonder is whether it is necessary for us to create a jenkins subdirectory to hold the pipeline job stuff, and a travis subdirectory if Travis CI stuff comes along. Anyway, it's just a suggestion. Your changes look good to me :) |
|
Well, I am not sure the pipeline shared library can work from a subdirectory, according to the documentation (https://jenkins.io/doc/book/pipeline/shared-libraries/). I agree it would be desirable, but if it is possible at all, it would require some extra research. Since it is not really a strong impediment and just a suggestion, I think we should go with the default expected usage (or maybe create an additional repository for the shared library, but I guess that is also troublesome). I still have no Apache Jenkins access nor an LDAP account. I think it would help speed up the setup if I had access (possibly limited to a previously created job/folder inside Jenkins only for NuttX). Maybe @justinmclean can help with that, but I'm not sure. If not, I can help you set up the job, as commented before. |
This is an example of running NuttX for the lm3s6965-ek board on QEMU.
|
Since the current CI system is based on GitHub Actions, let's close this now.

This is example code to help set up a multibranch pipeline job as described in this continuous integration draft proposal, intended to help discussion on the topic.
This pull request is not intended to be functional or merged as it is, unless it is used as scaffolding to integrate real NuttX build and test scripts.
It consists of a pipeline script, defined as an external (shared) library in the testing repo, that can be used by a job in the main one.
The expected output should be similar to what can be seen in the following link: http://nuttx.ci.midocloud.net:8080/blue/pipelines
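For reference, a Jenkins shared library of this kind is conventionally laid out with a vars/ directory whose scripts become global pipeline steps; the following is a generic sketch, and the names are illustrative rather than the exact ones used in this pull request.

// vars/nuttxPipeline.groovy in the testing repo (the name is illustrative).
// Every script under vars/ becomes a global step, so the Jenkinsfile in the
// main repo only needs to call nuttxPipeline().
def call() {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // apps/ and testing/ would be checked out in an earlier stage
                    sh './testing/cibuild.sh -b full'
                }
            }
        }
    }
}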