Commit 9ca1b553 authored by intrigeri

Merge branch 'SponsorS-leftovers'

parents 5ee2fd8c d4561bef
[[!meta title="Automated tests implementation details"]]
See the [[design and implementation
documentation|contribute/working_together/roles/sysadmins/Jenkins]]
of our current setup.
For Jenkins resources, see [[blueprint/automated_builds_and_tests/resources]].
[[!toc levels=2]]
Generating jobs
===============
We use code spread across three different Git repositories to
automatically generate the list of Jenkins jobs for branches that are
active in the Tails main Git repository.
The first brick is the Tails
[[!tails_gitweb_repo pythonlib]], which extracts the list of
active branches and outputs the needed information. This list is parsed
by the `generate_tails_iso_jobs` script, run by a cronjob and deployed by
our [[!tails_gitweb_repo puppet-tails]]
`tails::jenkins::iso_jobs_generator` manifest.
This script outputs YAML files compatible with
[jenkins-job-builder](http://docs.openstack.org/infra/jenkins-job-builder).
It creates one `project` for each active branch, which in turn uses
three JJB `job templates` to create the three jobs for each branch: the
ISO build job, the wrapper job used to start the ISO test job, and the
ISO test job itself.
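For illustration, the generator works along these lines; this is a simplified
sketch, not the actual `generate_tails_iso_jobs` script, and the branch names
and output path are made up:

```python
#!/usr/bin/python3
# Simplified sketch of a JJB job generator: emit one JJB "project" per
# active branch, referencing existing job templates. Illustrative only.
import yaml

JOB_TEMPLATES = [
    "build_Tails_ISO_{name}",
    "wrap_test_Tails_ISO_{name}",
    "test_Tails_ISO_{name}",
]

def generate_projects(active_branches):
    # Each JJB "project" expands {name} inside the job templates above.
    return [{"project": {"name": branch, "jobs": JOB_TEMPLATES}}
            for branch in active_branches]

if __name__ == "__main__":
    branches = ["stable", "devel", "feature-foo"]  # normally read from Git
    with open("projects.yaml", "w") as f:
        yaml.safe_dump(generate_projects(branches), f, default_flow_style=False)
```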
These changes are pushed to our [[!tails_gitweb_repo jenkins-jobs]] Git
repo by the cronjob, and thanks to their automatic deployment by our
`tails::jenkins::master` and `tails::gitolite::hooks::jenkins_jobs`
manifests in our [[!tails_gitweb_repo puppet-tails]] repo, these new
changes are applied automatically to our Jenkins instance.
Restarting slave VMs between jobs
=================================
This question is tracked in [[!tails_ticket 9486]].
For [[!tails_ticket 5288]] to be robust enough, since the test suite doesn't
_always_ clean up after itself properly (e.g. when tests simply hang
and time out), we might want to restart `isotesterN.lizard` between
each ISO testing job.
If such VMs are Jenkins slaves, then we can't do it as part of the job
itself, but workarounds are possible, such as having a job that restarts
and waits for the VM, then triggers another job that actually starts the
tests. Or, instead of running `jenkins-slave` on those VMs, running
one instance thereof somewhere else (in a Docker container on
`jenkins.lizard`?) and then having "restart the testing VM and wait for
it to come up" be part of the testing job.
This was discussed at least in these threads:
* <http://jenkins-ci.361315.n4.nabble.com/How-to-reboot-a-slave-during-a-build-td4628820.html>
* <https://stackoverflow.com/questions/5543413/reconfigure-and-reboot-a-hudson-jenkins-slave-as-part-of-a-build>
We achieve this VM reboot by using three chained jobs:
* The first one is a wrapper that triggers the two other jobs. It is executed on the
isotester the test job is supposed to be assigned to. It puts the
isotester in offline mode and starts the second job, blocking while
waiting for it to complete. This way the isotester is left reserved
while the second job runs, and the isotester name can be passed as a build
parameter to the second job. This job has a low priority, so it waits for
other jobs of the second and third types to be completed before starting its
own work.
* The second job is executed on the master (which has 4 build
executors). This job SSHes into the said isotester and issues the
reboot. It needs to wait a reasonable amount of time for the Jenkins
slave to be stopped by the shutdown process, so that no jobs get assigned
to this isotester meanwhile. Stopping this Jenkins slave daemon usually
takes a few seconds. During testing, 5 seconds proved to be enough of
a delay for that, and more would mean unnecessary lagging time. It then
puts the node back online again. This job has a higher priority so that it
does not lag behind other wrapper jobs in the queue.
* The third job is the test job, run on the freshly started isotester.
This one has a high priority too, so it gets executed before any other
wrapper jobs. These jobs are set to run concurrently, so that if a first one is
already running, a more recent one triggered by a new build will still
be able to run and not be blocked by the first running one.
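As a rough illustration of the second job's logic (not our actual job
definition; the Jenkins URL, credentials, node name, and reboot command are
placeholders):

```python
#!/usr/bin/python3
# Illustrative sketch of the "reboot the isotester" job run on the master.
# URL, credentials, node name and reboot command are placeholders.
import subprocess
import time
import jenkins  # python-jenkins

NODE = "isotester1.lizard"  # in reality passed as a build parameter
server = jenkins.Jenkins("https://jenkins.example.org",
                         username="reboot-bot", password="API_TOKEN")

# Keep the node marked offline so Jenkins doesn't schedule builds on it.
server.disable_node(NODE, msg="rebooting between test jobs")

# Ask the VM to reboot; the SSH connection may drop, which is fine here.
subprocess.run(["ssh", NODE, "sudo", "reboot"])

# Give the Jenkins slave daemon a few seconds to stop, so no job gets
# assigned to this isotester meanwhile (5 seconds proved to be enough).
time.sleep(5)

# Put the node back online; it will reconnect once the VM is up again.
server.enable_node(NODE)
```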
<a id="chain"></a>
Chaining jobs
=============
These are all supported by JJB v0.9+.
As we'll have to pass some parameters, the ParameterizedTrigger plugin
is the best candidate for us.
Passing parameters through jobs
===============================
We already specified what kind of information we want to pass from the
build job to the test job.
The ParameterizedTrigger plugin is the one usually used for that kind of
work.
We'll use it for some basic parameter passing between jobs, but given
the test jobs will need to know a lot of parameters from the build job, we'll
also use the EnvInject plugin we're already using:
* In the build job, a script collects every necessary parameter
defined in the automated test blueprint and outputs them to a file
in the `/build-artifacts/` directory.
* This file is the one used by the build job to set up the variables it
needs (currently only `$NOTIFY_TO`).
* At the end of the build job, this file is archived with the other
artifacts.
* At the beginning of the chained test job, this file is imported into
the workspace along with the build artifacts. The EnvInject pre-build
step uses it to set up the necessary variables.
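As an illustration, the build-side step could look roughly like this; apart
from `NOTIFY_TO`, the variable names and the file path are made up for the
example:

```python
#!/usr/bin/python3
# Hypothetical sketch of the build job step that exports parameters in
# the KEY=value format EnvInject understands. Names are illustrative.
import os
from pathlib import Path

def collect_parameters():
    # In reality, computed from the Git branch, build configuration, etc.,
    # as specified in the automated test blueprint.
    return {
        "NOTIFY_TO": os.environ.get("NOTIFY_TO", "ci@example.org"),
        "UPSTREAM_BRANCH": os.environ.get("GIT_BRANCH", "unknown"),
    }

def write_env_file(params, path="build-artifacts/jenkins.env"):
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w") as f:
        for key, value in sorted(params.items()):
            f.write(f"{key}={value}\n")

if __name__ == "__main__":
    write_env_file(collect_parameters())
```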
Define which $OLD_ISO to test against
=====================================
In the end, we will by default use the same ISO for both `--old-iso` and
`--iso`, except for the branches used to prepare releases (`devel` and
`stable`), so that we know if the upgrades are broken long before the
next release.
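In other words, the default boils down to something like the following sketch
(helper and variable names are hypothetical):

```python
# Hypothetical sketch of the default --old-iso choice described above.
def default_old_iso(branch, new_iso, last_release_iso):
    # Release-preparation branches test upgrades from the last released
    # ISO, so broken upgrades are caught long before the next release.
    if branch in ("stable", "devel"):
        return last_release_iso
    # Every other branch tests against the freshly built ISO itself.
    return new_iso
```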
Retrieving the ISOs for the test
================================
We'll need a way to retrieve the different ISOs needed for the test.
For the ISO related to the upstream build job, this shouldn't be a
problem with #9597. We can get it with either wget, or a Python script
using python-jenkins. That was the point of this ticket.
For the last release ISO, we have several means:
* Using wget to get it from <http://iso-history.tails.boum.org>. This
website is password protected, but we could set up another private
vhost for the isotesters.
* Using the git-annex repo directly.
We'll use the first one, as it's easier to implement.
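For illustration, both retrieval paths could look roughly like this; the
Jenkins URL, job name, credentials, and the iso-history URL layout are
assumptions, and authentication against the password-protected vhost is
omitted:

```python
#!/usr/bin/python3
# Illustrative sketch of retrieving the ISOs a test job needs.
# URLs, job names and credentials are assumptions, not our actual setup.
import urllib.request
import jenkins  # python-jenkins

def fetch_upstream_build_iso(server, job, build_number, dest):
    """Download the ISO artifact produced by the upstream build job."""
    info = server.get_build_info(job, build_number)
    for artifact in info["artifacts"]:
        if artifact["fileName"].endswith(".iso"):
            url = info["url"] + "artifact/" + artifact["relativePath"]
            urllib.request.urlretrieve(url, dest)
            return dest
    raise RuntimeError("no ISO artifact found")

def fetch_last_release_iso(version, dest):
    """Download the last released ISO from the ISO history website."""
    # Assumed URL layout; authentication is omitted for brevity.
    url = ("https://iso-history.tails.boum.org/"
           f"tails-amd64-{version}/tails-amd64-{version}.iso")
    urllib.request.urlretrieve(url, dest)
    return dest

if __name__ == "__main__":
    server = jenkins.Jenkins("https://jenkins.example.org",
                             username="ci", password="API_TOKEN")
    fetch_upstream_build_iso(server, "build_Tails_ISO_devel", 42, "new.iso")
    fetch_last_release_iso("6.0", "old.iso")
```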
Below, importance level is evaluated based on:
* signing keys are managed with the `tails_secrets_jenkins` Puppet module
- web server:
* some configuration in the manifest ([[!tails_ticket 7107]])
* design documentation: [[sysadmins/Jenkins]]
* importance: critical (as a key component of our development process)
## Mail
[[!meta title="Automated ISO/IMG builds and tests on Jenkins"]]
[[!toc levels=1]]
Generating jobs
===============
We automatically generate a set of Jenkins jobs for branches that are
active in the Tails main Git repository.
The first brick extracts the list of active branches and outputs the
needed information:
- [[!tails_gitweb config/chroot_local-includes/usr/lib/python3/dist-packages/tailslib/git.py]]
- [[!tails_gitweb config/chroot_local-includes/usr/lib/python3/dist-packages/tailslib/jenkins.py]]
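Roughly speaking, this brick does something like the following sketch; this is
not the actual `tailslib` API, only an illustration of the idea using plain
Git commands, and the definition of "active" is an assumption:

```python
#!/usr/bin/python3
# Illustration only: list branches of the main Tails repository that
# were updated recently enough to be considered "active".
# The real logic lives in tailslib (git.py and jenkins.py).
import subprocess
import time

MAX_AGE_DAYS = 30  # assumption: what counts as "active"

def active_branches(repo_path="."):
    out = subprocess.run(
        ["git", "for-each-ref",
         "--format=%(refname:short) %(committerdate:unix)", "refs/heads/"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    cutoff = time.time() - MAX_AGE_DAYS * 24 * 3600
    return [line.rsplit(" ", 1)[0]
            for line in out.splitlines()
            if int(line.rsplit(" ", 1)[1]) >= cutoff]

if __name__ == "__main__":
    print("\n".join(active_branches()))
```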
This list is parsed by the `generate_tails_iso_jobs` script run by
a cronjob and deployed by our [[!tails_gitweb_repo puppet-tails]]
`tails::jenkins::iso_jobs_generator` manifest.
This script outputs YAML files compatible with
[jenkins-job-builder](http://docs.openstack.org/infra/jenkins-job-builder).
It creates one `project` for each active branch, which in turn uses
several JJB `job templates` to create jobs for each branch:
- `build_Tails_ISO_*`
- `reproducibly_build_Tails_ISO_*`
- `test_Tails_ISO_*`
These changes are pushed to our [[!tails_gitweb_repo jenkins-jobs]] git
repo by the cronjob, and thanks to their automatic deployment in our
`tails::jenkins::master` and `tails::gitolite::hooks::jenkins_jobs`
manifests in our [[!tails_gitweb_repo puppet-tails]] repo, these new
changes are applied to our Jenkins instance.
Passing parameters through jobs
===============================
We pass information from build job to follow-up jobs (reproducibility
testing, test suite) via two means:
- the Parameterized Trigger plugin, whenever it's sufficient
- the EnvInject plugin, for more complex cases:
* In the build job, a script collects the needed information and
writes it to a file that's saved as a build artifact.
* This file is used by the build job itself, to set up the variables it
needs (currently only `$NOTIFY_TO`).
* Follow-up jobs import this file into the workspace along with the
build artifacts, then use an EnvInject pre-build step to load it
and set up variables accordingly.
# Builds
See [[contribute/working_together/roles/sysadmins/automated_builds_in_Jenkins]].
# Tests
See [[contribute/working_together/roles/sysadmins/automated_tests_in_Jenkins]].
[[!meta title="Automated ISO builds on Jenkins"]]
[[!meta title="Automated ISO/IMG builds on Jenkins"]]
We re-use the [[Vagrant-based build system|contribute/build/vagrant-setup]] we
have created for developers.
[[!meta title="Automated ISO tests on Jenkins"]]
[[!meta title="Automated ISO/IMG tests on Jenkins"]]
[[!toc levels=2]]
# For developers
## Full test suite vs. scenarios tagged `@fragile`
Jenkins generally only runs scenarios that are _not_ tagged `@fragile`
in Gherkin. But it runs the full test suite, including scenarios that
are tagged `@fragile`, if the images under test were built:
Therefore, to ask Jenkins to run the full test suite on your topic
branch, give it a name that ends with `+force-all-tests`.
## Trigger a test suite run without rebuilding images
Every `build_Tails_ISO_*` job run triggers a test suite run
(`test_Tails_ISO_*`), so most of the time, we don't need
get the test suite running eventually:
Thankfully, there is a way to trigger a test suite run without having
to rebuild images first. To do so, start a "build" of the
corresponding `test_Tails_ISO_*` job, passing to the
`UPSTREAMJOB_BUILD_NUMBER` parameter the ID of the `build_Tails_ISO_*`
job build you want to test.
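This can be done from the Jenkins web interface, or for instance with a small
python-jenkins call like the sketch below (the Jenkins URL, credentials,
branch and build number are placeholders):

```python
#!/usr/bin/python3
# Sketch only: trigger a test suite run for images that were already
# built. URL, credentials, job name and build number are placeholders.
import jenkins  # python-jenkins

server = jenkins.Jenkins("https://jenkins.example.org",
                         username="me", password="API_TOKEN")

# Re-test the images produced by build #123 of the corresponding build job.
server.build_job("test_Tails_ISO_devel",
                 parameters={"UPSTREAMJOB_BUILD_NUMBER": "123"})
```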
Do <strong>not</strong> directly start a <code>test_Tails_ISO_*</code> job:
this is not supported. It would fail most of the time in confusing ways.
</div>
## Jenkins jobs you can safely ignore
The success/failure of the `keep_node_busy_during_cleanup` job does
not matter.
# For sysadmins
## Old ISO used in the test suite in Jenkins
Some tests like upgrading Tails are done against a Tails installation made from
the previously released ISO and USB images. Those images are retrieved
using wget from <https://iso-history.tails.boum.org>.
In some cases (e.g. when the _Tails Installer_ interface has changed), we need to
temporarily change this behaviour to make tests work. To have Jenkins
use the ISO being tested instead of the last released one:
2. File a ticket to ensure this temporary change gets reverted
in due time.
## Restarting slave VMs between test suite jobs
For background, see [[!tails_ticket 9486]], [[!tails_ticket 11295]],
and [[!tails_ticket 10601]].
Our test suite doesn't _always_ clean up after itself properly (e.g.
when tests simply hang and time out), so we have to reboot
`isotesterN.lizard` between ISO test jobs. We have
[[!tails_ticket 17216 desc="ideas"]] to solve this problem, but that's where we're at.
We can't reboot these VMs as part of a test job itself: this would
fail the test job even when the test suite has succeeded.
Therefore, each "build" of a `test_Tails_ISO_*` job runs the test suite,
and then:
1. Triggers a high priority "build" of the
`keep_node_busy_during_cleanup` job, on the same node.
That job will ensure the isotester is kept busy until it has
rebooted and is ready for another test suite run.
1. Gives Jenkins some time to add that `keep_node_busy_during_cleanup`
build to the queue.
1. Gives the Jenkins Priority Sorter plugin some time to assign its
intended priority to the `keep_node_busy_during_cleanup` build.
1. Does everything else it should do, such as cleaning up and moving
artifacts around.
1. Finally, triggers a "build" of the `reboot_node` job on the Jenkins
master, which will put the isotester offline, and reboot it.
1. After the isotester has rebooted, when `jenkins-slave.service` starts,
it puts the node back online.
For more details, see the heavily commented implementation in
[[!tails_gitweb_repo jenkins-jobs]]:
- `macros/test_Tails_ISO.yaml`
- `macros/keep_node_busy_during_cleanup.yaml`
- `macros/reboot_node.yaml`
## Executors on the Jenkins master
We need to ensure the Jenkins master has enough executors configured
so it can run as many concurrent `reboot_node` builds as necessary.
This job can't run in parallel for a given `test_Tails_ISO_*` build,
so what we strictly need is: as many executors on the master as we
have nodes allowed to run `test_Tails_ISO_*`. This currently means: as
many executors on the master as we have isotesters.