---
title: Faster builds
---

We want to streamline the developer experience (DX) by having faster builds.
We think parity between "devel" and "real" release builds is important; maybe
we can make exceptions, maybe not.

We didn't have enough data on typical build times, which means we weren't
really sure what we should optimize.

Several ideas are collected here as subsections.

Data
--------

This data has been collected after the meeting.

Looking at a build of a feature branch made by iguana:

- total time is around 18 minutes
- package installation ends at 05:10
- build of TBB ends at 05:51
- generating locales ends at 07:12 (takes a little more than one minute)
- cleanup ends at 08:13
- squashfs starts at 08:57 and ends at 15:03
- archiving artifacts starts at 16:13 and ends at 18:04 (this is maybe
  Jenkins-specific?)

Looking at a build made by lizard:

- total time is 44:45
- package installation ends at 11:34
- squashfs starts at 22:26 and ends at 37:59

Workflows
--------------

It's acceptable for us not to improve the build time when building from
scratch, as long as we improve iteration on it. So let's define workflows, so
that we can see how well each solution works for these workflows.

### 1-file-changed

The developer changes just one Python file in `chroot-local_includes/`.
This is the most common workflow.

### 1-package-added

The developer adds one package.

### tbb-upgraded

The tbb version is changed.

Layered build
--------------------

This is docker-style, and could perhaps even be done with a Dockerfile. The
main idea is that our build process has stages. If we can avoid running a
stage that we already have data for, we can speed up the process
substantially. Layers could be:

- base system
- `chroot-local_packageslists/`
- `chroot-local_includes/`
- `chroot-local_hooks/`
- squashfs of all the previous layers
- prepare final image

We could do this quite easily with something like a Dockerfile or, more
probably, buildah. This technique will probably give us substantial advantages
for the first stages. As it is, though, it is not as effective as it could be:
changing any file will also re-run the hooks, which is not that cheap. A
simple fix is splitting `chroot-local_includes/`:

- `chroot-local_00-includes/`
- `chroot-local_00-hooks/`
- `chroot-local_01-includes/`
- `chroot-local_01-hooks/`

This is still layered, but if you change a file in 01-includes, only the hooks
in 01-hooks will be run. Hopefully, the most expensive hooks will go in
00-hooks, and most files will go in 01-includes, which means that changing
them will not trigger hooks in 00-hooks/.

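As a rough illustration of the stage-skipping idea, here is a minimal sketch
in shell: each stage is keyed on a hash of its input files and re-run only
when that hash changes. The helper name `run_stage` and the cache layout are
hypothetical, not part of our build system; Docker and buildah implement
essentially this, with filesystem snapshots instead of marker files.

```sh
#!/bin/sh
# Minimal sketch of content-keyed stage caching (hypothetical helper,
# not the real build script).
set -eu

CACHE_DIR="${CACHE_DIR:-/tmp/build-cache}"
mkdir -p "$CACHE_DIR"

# run_stage NAME INPUT_DIR CMD...: run CMD only if the contents of
# INPUT_DIR changed since the last successful run of this stage.
run_stage() {
    name="$1"; inputs="$2"; shift 2
    # Key = hash of every file's content under the stage's inputs.
    key=$(find "$inputs" -type f -exec sha256sum {} + | sort | sha256sum | cut -d' ' -f1)
    marker="$CACHE_DIR/$name.$key"
    if [ -e "$marker" ]; then
        echo "stage $name: unchanged, skipping"
        return 0
    fi
    "$@"
    rm -f "$CACHE_DIR/$name".*   # invalidate older keys for this stage
    touch "$marker"
}
```

Under this scheme, each layer from the list above would become one
`run_stage` call keyed on its input directory.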
Pro:

- doesn't seem too hard!
- there will probably be a good advantage: looking at the data I collected,
  the `1-file-changed` workflow can get a ~40% speedup. `tbb-upgraded` should
  also benefit nicely.

Cons:

- the squashfs time is not improved at all

Layered squashfs
---------------------

This idea is meant to be an extension of "Layered build". It goes this way:

- you do a layered build
- you squashfs each layer separately
- the boot system is adapted to stack all those layers (in the right order, of
  course)

This will probably make the squashfs part much faster: you can cache the
squashfs files, so `1-file-changed` only needs to re-squash the layer that
actually changed.

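To make the stacking concrete, here is a sketch of how the boot side could
assemble the per-layer squashfs mounts with overlayfs. The mountpoint names
are invented, and our boot system would need the equivalent logic; note that
overlayfs lists `lowerdir` entries top-to-bottom, so the layer built last
must come first.

```sh
#!/bin/sh
# Hypothetical sketch: build the overlayfs lowerdir option from per-layer
# squashfs mountpoints given bottom-to-top.
compose_lowerdir() {
    result=""
    for dir in "$@"; do
        # Prepend, so the last (topmost) layer ends up leftmost.
        result="$dir${result:+:$result}"
    done
    printf '%s\n' "$result"
}

# The result would then feed a mount such as:
#   mount -t overlay overlay \
#     -o "lowerdir=$(compose_lowerdir /mnt/base /mnt/includes /mnt/hooks)" /sysroot
```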
Pro:

- faster

Cons:

- an image built this way will be quite different from one built for release,
  because it has multiple squashfs files. This will probably be irrelevant in
  many development practices, but maybe not all of them.

Horizontal composition
-------------------------

Some of our hooks (e.g. Tor Browser) essentially create something that must go
inside the chroot. Can we cache them? We can, if we clearly separate what they
do from what they need. We could have many of these "modules" that are
self-contained, are built only when necessary, and produce an artifact which
is later included in the chroot.

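A hypothetical sketch of what such modules could look like (the layout and
names are invented for illustration): each module is self-contained and
produces an artifact, and because modules are independent of each other they
can be built in parallel.

```sh
#!/bin/sh
# Hypothetical sketch, not the real build system: build self-contained
# modules in parallel; each produces an artifact that is later copied
# into the chroot.
set -eu

WORK=$(mktemp -d)
ARTIFACTS="$WORK/artifacts"
mkdir -p "$ARTIFACTS"

build_module() {
    # A real module would run its own build steps; here we just archive
    # its sources to stand in for "produce an artifact".
    name="$1"; src="$2"
    tar -C "$src" -cf "$ARTIFACTS/$name.tar" .
}

# Two toy modules standing in for e.g. tor-browser and locales.
for m in modA modB; do
    mkdir -p "$WORK/modules/$m"
    echo "$m" > "$WORK/modules/$m/payload"
done

# Modules are independent, so they can be built concurrently.
for m in modA modB; do
    build_module "$m" "$WORK/modules/$m" &
done
wait
```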
This is not mutually exclusive with _Layered build_.

Pro:

- parallelization!

Con:

- not clear how relevant it is
- moving things to this kind of module is not always easy

Early-patching
-----------------

This is very much optimized for post-chroot changes, so `1-file-changed` or
`tbb-upgraded`.

Make an initramfs hook that, if the kernel cmdline includes a "devpatch"
option, will do something like:

```sh
mount -t 9p includes /somewhere/ && rsync -ra /somewhere/ /
```

where of course the `includes` share must point to `chroot-local_includes/`
or, better, to some directory where we merged the results of
`chroot-local_includes/` and the build artifacts produced by _Horizontal
composition_.

If you want to run the test suite against your new system, we need to add an
option to `run_test_suite` that adds the `devpatch` option to the cmdline.

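Expanding the one-liner above, the hook could look roughly like this. The
"devpatch" option name is the one proposed here; the helper names are
invented, and only the cmdline parsing can meaningfully run outside the
initramfs.

```sh
#!/bin/sh
# Sketch of the initramfs hook (hypothetical helper names).
has_devpatch() {
    # $1: the kernel command line, normally read from /proc/cmdline.
    case " $1 " in
        *" devpatch "*) return 0 ;;
        *) return 1 ;;
    esac
}

apply_devpatch() {
    has_devpatch "$1" || return 0
    # Overlay the shared includes directory onto the root filesystem.
    mkdir -p /somewhere
    mount -t 9p includes /somewhere && rsync -ra /somewhere/ /
}
```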
Pro:

- this will probably be extremely fast for `1-file-changed`!
- not many changes needed to our code setup

Con:

- this doesn't speed up every scenario
- there might be cases in which a system run this way is not really the same
  as the one that would be built.

Question:

- should we keep this initramfs hook in production images too? I think we can
  keep it, because you still need to enable it via an explicit option on the
  kernel cmdline. That was good enough for `autotest_never_use_this_option`,
  so the only downside will be adding the `9p` kernel module to the
  initramfs, which doesn't seem so bad.

OSTree
----------

**TODO:** there were many ideas about how the layered approach could lead to
OSTree and relate to [Endless upgrades](Endless_upgrades.md). Insert them
here.