Commit fc44b07a authored by anonym

Merge remote-tracking branch 'origin/master' into testing

parents 58a7e747 ed98ec37
......@@ -10,7 +10,7 @@ MUST
- Make it easy to ensure everything is answered
- Be able to follow an issue from the beginning to the end
- Statistics:
- Know how many users encountered the same issue. Spot the "Top bug".
- Be able to have stats on common issues
- Security of the platform:
- Allow secure deletion of information over time. Not keep a database forever (how long? what to keep?)
......@@ -52,9 +52,9 @@ evaluate the idea of basing Tails on snapshots of Debian testing.
sprints, nor between them;
* we get a feeling of how "being based on snapshots of Debian
testing" would work.
* February 5 2017 — Debian Stretch freeze starts
* June 2017 (???) — Debian Stretch is released
* June-August 2017 — Tails 3.0 is released
# Let's go rolling
......@@ -80,12 +80,6 @@ and a set of third-party tools listed here:
There are other tools that could be explored, like:
In conclusion, the biggest difference in the current Tails version is the usage
of the Python interface for GTK3 (PyGI). The tools/libraries for Windows used in
the current upstream liveusb-creator version seem in principle to be the same as
those used for Tails right now, except for the GUI parts and the storage
operations, which use udisks2.
# Analysis regarding operations on storage devices
According to <>,
......@@ -417,4 +411,19 @@ In another fresh Windows 8.1 VM
# Conclusion
We have outlined the main requirements we would face when fully porting the Tails
Installer to Windows. The major differences between the current Tails version and
the former upstream Fedora version are the usage of the Python interface for GTK3
(PyGI) and the udisks2 library for disk operations.
We have found that there are alternative libraries we could use on Windows in order
to perform the Linux-specific operations. Some of them seem currently maintained and
updated, for instance PyGI for Windows and extlinux. Others, like sgdisk and its
dependencies, might need some custom maintenance, which could raise the amount of
required effort. However, even with the extra work required, it seems entirely likely
that we can successfully port Tails Installer to Windows.
If our objective is to ease Tails adoption for Windows users, especially people who
have never used Linux, then this project may be worth the effort. The question
then becomes: is this the cheapest and/or best way to ease Tails
adoption for Windows users?
......@@ -18,9 +18,9 @@ This report covers the activity of Tails in February 2016.
Everything in this report can be made public.
# Z. Example section title
## Z.n. description of subsection
- A.n.m. description of deliverable: ticket numbers
......@@ -30,13 +30,307 @@ Everything in this report can be made public.
* what is the outcome (how it makes Tails better)
* what was not done, and why
# A. Replace Claws Mail with Icedove
XXX: u
The last few traces of Claws Mail have been purged from Tails
([[!tails_ticket 10904]]).
- A.1.5. Update Icedove documentation
The design documentation has been updated to reflect the current
state of the Icedove integration into
Tails. ([[!tails_ticket 10737]])
- A.X.X Upstream the secure Icedove autoconfig wizard
We've posted our patches upstream and received some first comments which approve of our concept, and even suggest forcing the use of SSL for all connections. ([[!tails_ticket 6156]])
We've successfully built Debian packages and tested our patches in Icedove ([[!tails_ticket 6564]])
# B. Improve our quality assurance process
## B.1. Automatically build ISO images for all the branches of our source code that are under active development
In February, **603 ISO images** were automatically built by our Jenkins instance.
The code that manages the cleaning of previous builds leftovers on our
builder VMs is ready and waiting for reviews. ([[!tails_ticket 10772]])
## B.2. Continuously run our entire test suite on all those ISO images once they are built
In February, **597 ISO images** were automatically tested by our Jenkins instance.
We decided on a way to collectively check for false positives in our
test infrastructure. We have planned one shift per month so that we can be sure
one of us is taking care of them. ([[!tails_ticket 10993]])
We still need to check whether two minor bugs are still present, but they don't
seem to prevent our automated tests from running. ([[!tails_ticket 10725]] and
[[!tails_ticket 10601]])
## B.3. Extend the coverage of our test suite
### B.3.10. Write automated tests for the new features in 2016Q1
XXX: anonym or intrigeri
* Write tests for connecting to hosts on a LAN via SSH
([[!tails_ticket 9087]])
* Automatically test the Greeter's Disable All Networking option
([[!tails_ticket 10340]])
### B.3.11. Fix newly identified issues to make our test suite more robust and faster
- Reduce peak space requirement for full test suite runs
Given the snapshot improvements reported on before
([[!tails_ticket 6094]]) our test suite accumulates more and more
virtual machine snapshots throughout a full run, and has to keep
most of them until the end. These occupy quite a bit of disk space,
which in our case translates into RAM since we want everything
stored in RAM for performance reasons. By optimizing the order in
which our features are run we have managed to reduce the peak space
requirement, which allows us to run more tests in parallel on the
same hardware. ([[!tails_ticket 10503]])
- Robustness improvements
Given the rather large number of robustness issues we experience, we
have rethought our strategy and will try more fundamental approaches
to attacking them. The vast majority of issues fall into two categories:
* Transient network issues: the Tor network simply isn't as reliable
as our test suite assumes, resulting in tests failing due to
unexpectedly long timeouts and similar. So far our approach has
been to make our tests retry the failing actions in the specific
places where they occur, much like how a user would deal with the
situation. However, this does not scale well, since it seems we
have to do this everywhere, and the code for this is not always easy
to maintain.
Instead we will improve the Tor network stability by making our
test suite set up its own private Tor network. This
should eliminate all network instability introduced by normal Tor
usage. ([[!tails_ticket 9521]])
In the long-term, and certainly outside the scope of this
contract, we would like to extend this network simulation so that
all network services used in tests are run locally, and the real
Internet is not used at all. ([[!tails_ticket 9519]],
[[!tails_ticket 9520]])
* Glitches when interacting with graphical user interfaces:
currently we simulate users with a black-box approach, where
our simulated user relies on exact images of the elements it interacts
with. This has turned out harder than anticipated, since modern
desktop environments do not behave as deterministically as one
would hope.
Our new plan is to leverage the interfaces used by assistive
technologies (like screen readers) to communicate the
structure and layout of graphical user interfaces to sight-impaired
users. This will allow a more deterministic and reliable
approach for our test suite to interact with applications.
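The gist of that approach can be sketched with a toy model (the widget tree and names below are invented; real assistive tools query the AT-SPI bus rather than an in-memory structure): instead of matching screenshots pixel by pixel, the test suite asks for a widget by its role and accessible name.

```python
class Widget:
    """Minimal stand-in for a node exposed over an accessibility API."""
    def __init__(self, role, name, children=()):
        self.role = role
        self.name = name
        self.children = list(children)

def find_widget(node, role, name):
    """Depth-first search for a widget by role and accessible name,
    the kind of query assistive technologies rely on."""
    if node.role == role and node.name == name:
        return node
    for child in node.children:
        hit = find_widget(child, role, name)
        if hit is not None:
            return hit
    return None

# Invented example: locate the "Connect" button of a dialog without
# looking at a single pixel.
dialog = Widget("dialog", "Network settings", [
    Widget("text entry", "Proxy address"),
    Widget("push button", "Connect"),
])
```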
Besides this, the following specific robustness issues were fixed:
* Test that clicks the roadmap URL in Pidgin is fragile
([[!tails_ticket 10783]])
* The "I can view and print a PDF file stored in /usr/share"
scenario is fragile ([[!tails_ticket 10775]])
- Performance improvements on Jenkins
We figured we could probably get some nice test suite performance
improvements in our Jenkins environment by optimizing the platform itself.
After an initial round of benchmarking conducted in January, our
next action was to give our server more RAM in order to give us more
flexibility to evaluate different configuration options. This was
done in February, and then we went through a few optimization
cycles, identifying bottlenecks and addressing them until we were
satisfied ([[!tails_ticket 11175]], [[!tails_ticket 11113]]).
As a result, we have improved our test suite run throughput, in the
worst-case scenario, from 3.3 to 8 runs per hour. This gives us room
to run more automated tests in that environment, and also shortens
the feedback loop for developers, since congestion is now less
likely to happen. We will keep an eye on metrics to confirm, in one
or two months, that real workloads indeed benefit from
these changes.
## B.4. Freezable APT repository
This project was still on hold in February, while the developer
responsible for this project was focusing on other matters; we will
resume work on it in March. However, we mistakenly scheduled for
milestone V (April 15) two big projects that have the same developers,
so we decided to spread them more evenly over the remaining five
months of this contract; in April we will focus on C.1 (Change in
depth the infrastructure of our pool of mirrors), and here is the
updated schedule for the freezable APT repository project.
By the end of March, we want to:
* complete the design and discussion phase, that is "B.4.1.
Specify when we want to import foreign packages into which APT
suites" ([[!tails_ticket 9488]]), and "B.4.4. Design freezable APT
repository" ([[!tails_ticket 9487]]);
* make enough progress on "B.4.2. Implement a mechanism to save the
list of packages used at ISO build time" so it can be merged in
April, ideally in time for Tails 2.3;
* have a working proof-of-concept for most other essential pieces of
infrastructure and code.
Then, after a hiatus in April while we will be focused on our pool of
HTTP mirrors, in May we want to improve the freezable APT repository
as needed, aiming at merging code into the main development branch,
and deploying all pieces of infrastructure in production, by the end
of the month. Our current goal is to build Tails 2.4 (scheduled on
June 7th) using our freezable APT repository.
And then, we will still have two months, until the end of the
contract. This slack might be needed if previous steps take more time
than expected, and if not it will be time for us to identify remaining
issues, gather feedback from release managers and developers, and to
improve tools and documentation as we deem necessary.
# C. Scale our infrastructure
## C.1. Change in depth the infrastructure of our pool of mirrors
* C.1.1. Specify a way of describing the pool of mirrors
([[!tails_ticket 8637]])
We've designed a file format, encoded it into a JSON schema, created
a simple validation script, and published an example configuration.
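As an illustration only (the real file format is the one specified in the JSON schema mentioned above; the field names here are invented), such a validation script boils down to parsing the JSON and checking the structural constraints:

```python
import json

def validate_pool(config_text):
    """Structurally check a (hypothetical) mirror pool description:
    a list of mirrors, each with a URL prefix and a non-negative weight."""
    config = json.loads(config_text)
    mirrors = config.get("mirrors")
    if not isinstance(mirrors, list) or not mirrors:
        raise ValueError("'mirrors' must be a non-empty list")
    for mirror in mirrors:
        if not isinstance(mirror.get("url_prefix"), str):
            raise ValueError("each mirror needs a 'url_prefix' string")
        weight = mirror.get("weight")
        if not isinstance(weight, int) or weight < 0:
            raise ValueError("'weight' must be a non-negative integer")
    return config
```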
We have discussed with the developers of the Download And
Verification Extension (DAVE) for Firefox how it will be able to
leverage this configuration file, and the code we are writing for
"C.1.2. Write & audit the code that makes the redirection decision
from our website", so that DAVE uses our new mirror pool design
([[!tails_ticket 10284]]). This discussion made us confident that
what we have been working on so far is compatible with DAVE.
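For illustration (this is not the audited C.1.2 code, and the configuration fields are invented), the redirection decision itself amounts to a weighted random choice over the pool:

```python
import random

def pick_mirror(mirrors, rng=random):
    """Pick one mirror at random, proportionally to its weight, so that
    operators can steer traffic by adjusting weights. A weight of 0
    takes a mirror out of rotation without deleting its entry."""
    weights = [m["weight"] for m in mirrors]
    return rng.choices(mirrors, weights=weights, k=1)[0]
```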
* C.1.3. Design and implement the mirrors pool administration process
and tools ([[!tails_ticket 8638]], [[!tails_ticket 11122]])
Building on top of what was done for C.1.1, a way to convey the
mirror pool's configuration to the dispatcher script, based on
ikiwiki underlays and Git, was designed and implemented.
Finally, we have organized our team to work on the next steps of this
project. A dedicated sprint will take place in April, during which we
want to complete all the needed programming, documentation and setup
tasks. Actual deployment might require more time, though: depending on
how fast mirror operators are to adjust to the new setup, we may have
to postpone the production deployment to May.
## C.2. Be able to detect within hours failures and malfunction on our services
- C.2.1. Research and decide what monitoring solution to use,
what tools and abstraction layer to use for configuring it,
and where to host it: [[!tails_ticket 8645]]
We settled on a plan while refining the details of the implementation
of Icinga2 in our infrastructure.
We agreed to use its distributed monitoring feature to isolate our
monitored systems from our monitoring one: a VM on the monitored host
will be set up as an Icinga2 `satellite`, and will collect the data
from the other monitored systems and send them back to the monitoring
system's Icinga2 instance. The latter will be the only one responsible
for sending notifications, and will also be the one running the
network checks.
Icinga2 will be the agent we'll use on all systems to collect
monitoring data.
We've also settled on the way to secure the communication between our
systems, and decided not to rely solely on Icinga2 SSL certificates,
but to harden it using a VPN.
This was also necessary because we chose to manage our monitoring
system with our current puppetmaster, which is hosted on the monitored
host. So both Icinga2 and Puppet benefit from this VPN.
[[!tails_ticket 10760]]
Still, some of the finer details of this design remain blurry to its
reviewer. We'll leave this discussion open, so that we can go on with
the deployment, while remaining able to discuss other questions that
may arise later.
- C.2.2. Set up the monitoring software and the underlying infrastructure
We've deployed the VPN between our systems [[!tails_ticket 11094]],
which led us to finish installing the OS on the monitoring
machine. It's now managed by our puppetmaster like any other of our
systems. [[!tails_ticket 8647]]
We also installed the VM on our monitored host that will serve as the
satellite relay to our monitoring system. [[!tails_ticket 10886]]
We then started writing into our Puppet manifests the recipes we learned
from the monitoring prototype tested on a developer's machine. We now have
Icinga2 installed on all of our systems with a basic configuration.
Then we configured it on the monitoring system as well as on the VM that
will be the satellite so that they are now both interconnected over
the VPN. [[!tails_ticket 8648]]
We still need to connect our Icinga2 agent instances on the rest of
our systems to this Icinga2 network. This will be done at the beginning
of March, and we'll then be able to implement the various checks we
defined in the blueprint, which are part of C.2.4 and C.2.6. Once
done, at the end of March we'll configure the notifications (C.2.5) and
will release our monitoring setup for the end of M5.
## C.4. Maintain our already existing services
We kept on answering the requests from the community as well as taking
care of security updates as covered by "C.4.5. Administer our services
up to milestone V".
We also did some background work to keep our infrastructure
sustainable in the long term:
* We made plans to upgrade to Debian 8 (Jessie) the small number of
Debian 7 (Wheezy) systems we still have ([[!tails_ticket 11178]],
[[!tails_ticket 11186]]).
* We modernized our Puppet setup a little bit. Notably, we converted
it from the deprecated Config File Environments, to the new
Directory Environments.
* We optimized our I/O-bound workloads, by spreading them over
multiple drives in a more efficient way.
# D. Migration to Debian Jessie
As reported last month, all remaining deliverables were completed
in January.
Still, as a follow-up we upgraded our ISO build system to Debian
Jessie, and then updated our Vagrant basebox and Jenkins ISO builders
accordingly ([[!tails_ticket 9262]]).
# E. Release management
- [[Tails 2.0.1|news/version_2.0.1]] was released on 2016-02-13 as an
emergency response to CVE-2016-1523 affecting Tor Browser:
* Enable the Tor Browser's font fingerprinting protections.
* Upgrade Tor Browser to 5.5.2.
* Repair 32-bit UEFI support.
......@@ -12,16 +12,19 @@ When you worked on a problem and published a paper about it, please let us know
The best way to reach us is through the [tails-dev]( mailing list, or at our (possibly) encrypted address tails[AT]
## Academic communities
The questions posed by Tails might be of interest for researchers from various fields. A list of potentially interested communities that we are aware of can be seen below.
* Anonymity researchers - [PETS](
* Usable Privacy and Security - [SOUPS](
* Computer science in various fields - [USENIX](
## Research ideas
* [Randomness seeding](
* [Persistent Tor state](
* [Time syncing](
## Research on Tor
The Tor Project has a page dedicated to the open [research questions] that they face. We benefit from any problem that is solved in Tor, and we welcome contributions to Tor.
......@@ -51,6 +51,69 @@ As we were saying.. let's do them as we like. And let's take them.
2) doc_first_step_ABIM --> ???
3) doc_installation --> Zeyev
4) doc_persistence --> Zeyev
5) doc_first_step_start --> ???
6) doc_first_step_RUS --> ???
7) doc_get --> ???
8) first_level --> ???
# Dizionario
......@@ -18,6 +18,6 @@ Discussions
- [[!tails_ticket 11027 desc="Decide what to do with the old OpenPGP verification instructions"]]
- [[!tails_ticket 11047 desc="Decide how to handle the upcoming monthly reports"]]
- [[!tails_ticket 11099 desc="Decide which pinentry we want to ship"]]
- [[!tails_ticket 11135 desc="Disable translations of /news/reports"]]
- [[!tails_ticket 11042 desc="Which keyboard layout switching shortcuts to support"]]
- [[!tails_ticket 7874 desc="Find a more stable solution for Tails default chat support channel"]]:
shall we drop some of our requirements for candidate replacements?
......@@ -74,6 +74,8 @@ XXX: Add the diff from the previous month, for example:
- Our test suite covers SCENARIOS scenarios, DIFF more than in May.
* In February XXX ISO images were automatically built and tested by our continuous integration infrastructure. XXX=ask
......@@ -83,6 +85,8 @@ XXX: Look at the fundraising Git.
XXX: Look at the <> and <> archives.
ask ->
......@@ -109,3 +113,5 @@ Metrics
* Tails has been started more than BOOTS/MONTH times this month. This makes BOOTS/DAY boots a day on average.
* SIGS downloads of the OpenPGP signature of Tails ISO from our website.
* WHISPERBACK bug reports were received through WhisperBack.
ask ->
[[!meta title="Tails report for February, 2016"]]
* [[Tails 2.0.1 was released on February 12, 2016|news/version_2.0.1]] (minor release).
* [[A release candidate for 2.2 was released on February 14, 2016|news/test_2.2-rc1]]
* The next release (2.2) is [[planned for March 08|contribute/calendar]].
The following changes were introduced in Tails 2.0.1:
- Upgrade Tor Browser to [5.5.2](
- Fix regression breaking boot on 32-bit UEFI platforms. ([[!tails_ticket 11007 desc="#11007"]])
XXX: List important code work that is not covered already by the Release
section (for example, the changes being worked on for the next version).
A friendlier build system
For years Tails has offered a
[[build system based on Vagrant|contribute/build/#index2h1]], which at
times has been well maintained and really easy to use, and so a great
resource for new contributors who want to test their
modifications. Sadly that was a while ago, mostly because all but one
of the Tails developers have been using their own custom build systems. The
main reason for that is that Vagrant uses Virtualbox by default, while
all of us (and our infrastructure, e.g. our Jenkins automated builds
and tests setup) greatly prefer the QEMU/KVM stack (and libvirt), and
it is not possible to mix two hypervisors at the same time.
But this is about to change! Recently there's been quite an effort to
[[!tails_ticket 6354 desc="migrate to vagrant-libvirt and the QEMU/KVM hypervisor"]]
which should allow all of us to converge to the same build
system. Besides saving development time since only one system has to
be maintained, it also means that this build system will be
well-maintained in the future, and so remain easy to use for everyone.
There are still a few roadblocks, though, and you can help, especially
if you are a Debian developer! Currently we need vagrant-libvirt (and
ruby-fog-libvirt) packaged and maintained in Debian, and
[quite a lot of work](
has already been done on that front. And to have the build system
working on Debian Jessie we need the following packages backported:
vagrant, ruby-excon, ruby-fog-core and ruby-fog-xml. If you want to
help, please get in touch with us on the
[ public mailing list](!
Documentation and website
* The blueprint for [[Porting Tails to Debian Stretch|blueprint/Debian_Stretch]] has been updated.
* Multiple commits were made to speed up the build process of the website.
User experience
- We are working on replacing Vidalia, which has been unmaintained for years (Closes: [[!tails_ticket 6841]]), with:
* the Tor Status GNOME Shell extension, which adds a System Status icon.
* [Onion Circuits](, a simple Tor circuit monitoring tool.
* Hide "Laptop Mode Tools Configuration" menu entry. We don't support configuring l-m-t in Tails, and it doesn't work out of the box. (Closes: [[!tails_ticket 11074]])
* There is now a blueprint for [[Porting Tails Installer to OS X|blueprint/Port_Tails_Installer_to_OS_X/]]
* We're now also publishing torrents for betas and RCs (Closes: [[!tails_ticket 11126]])
* The ISO build system has been [upgraded to jessie](
* Our test suite covers 208 scenarios
* In February 603 ISO images were automatically built and 597 were automatically tested by our continuous integration infrastructure.
XXX: Look at the fundraising Git.
gitk --all --since='1 December' --until='1 January' origin/master
XXX: Look at the <> and <> archives.
Past events
Upcoming events
- Some of us were present at the [Tor Winter Dev Meeting 2016](
- Participation at the [Internet Freedom Festival]( with one workshop on translating Tails to Spanish and
one workshop on the user experience of configuring Tor in Tails (and Tor Browser and Whonix).
- There will be a Tails 2.2 release party at [TetaLab](, March 8th at 18:00 in Toulouse, France.
On-going discussions
* A discussion about [[!tails_ticket 11162 desc="Creating personas to visualize our user base"]] has been started.