Make our CI feedback loop shorter
Originally created by @intrigeri on #16960 (Redmine)
Outdated blueprint: https://tails.boum.org/blueprint/hardware_for_automated_tests_take3/
Current situation
We currently have 2 physical machines dedicated to CI:
- dragon, with 24 threads running at 3800 MHz and 128 GB of RAM
- iguana, with 16 threads running at 3700 MHz and 128 GB of RAM
There are also the Jenkins orchestrator and various isobuilders and isotesters running on lizard. Ideally, these will have been decommissioned by the time this issue is closed.
Iguana is already running a gitlab-runner, isobuilders, and isotesters.
Goal
We want to have generic isoworkers (that can handle both build and test jobs), as well as a GitLab runner and perhaps the Jenkins orchestrator, running on our dedicated CI machines. We need to evaluate whether it is best to host the Jenkins orchestrator on one of our fast CI machines, or whether we're better off keeping the orchestrator on lizard and using those resources on the fast machines for an extra isoworker instead.
- Each generic isoworker should have 4 vCPUs and 16 GB of RAM.
- The gitlab-runner should also have 4 vCPUs and at least 8 GB of RAM.
- The Jenkins orchestrator should have 2 vCPUs and at least 4 GB of RAM.
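Assuming the isoworkers are libvirt/KVM guests like our existing isobuilders and isotesters, the sizing above could be expressed in a libvirt domain definition along these lines (the name and disk path are placeholders, not decided yet):

```xml
<!-- Sketch of a libvirt domain for one generic isoworker.
     "isoworker1" and the image path are illustrative; the
     4 vCPUs / 16 GiB values follow the sizing listed above. -->
<domain type='kvm'>
  <name>isoworker1</name>
  <memory unit='GiB'>16</memory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/isoworker1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```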
Steps
- deploy 5 isoworkers on dragon
- optimise I/O caching on dragon and its VMs
- experiment with CPU vulnerability mitigations on dragon: #17387 (closed)
- replace the current isobuilders/testers on iguana with 3 generic isoworkers
- expand the gitlab-runner's resources on iguana
- ensure the Jenkins orchestrator prioritises jobs to workers running on dragon, then to iguana, and then to lizard
- after two months, evaluate whether the isobuilders/testers on lizard are still heavily used; if they are, add another isoworker to dragon; if not, migrate the Jenkins orchestrator to dragon and:
  - remove the isotesters on lizard
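For the I/O caching step, one option worth evaluating (assuming libvirt/KVM guests, as elsewhere in our infrastructure) is relaxing the disk cache mode for the isoworker VMs: since these guests are disposable and hold no data worth preserving across a host crash, the safety/performance trade-off of write-back caching without flushes may be acceptable. A sketch of the relevant disk element:

```xml
<!-- Hypothetical disk definition for an isoworker VM.
     cache='unsafe' uses the host page cache and ignores guest
     flush requests, which is only acceptable because these
     guests are disposable; io='threads' selects the threaded
     AIO backend. Both are standard libvirt driver attributes. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='unsafe' io='threads'/>
  <source file='/var/lib/libvirt/images/isoworker1.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Whether this actually helps on dragon's storage should be measured against the default cache mode before rolling it out.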
Subtasks
Related issues
- Related to #11680 (closed)
- Related to #17216
- Related to tails#17361 (closed)
- Related to tails#16959 (closed)
- Related to #17387 (closed)
- Follows #15501 (closed)