Better extensibility: rethink persistent additional software using containerization
The current approach for adding additional software is far from ideal (e.g. see #19926 (closed)). Since we already use systemd within Tails, I'd like to suggest adding systemd-nspawn and making better use of it to solve this. (On Debian it is shipped in the separate package systemd-container, see https://wiki.debian.org/nspawn)
Focusing only on additional software: nspawn would allow us to have a separate userland for the additional software while still being able to control it through the service manager and use the usual systemd features. It would also add another security boundary and significantly simplify installing additional software persistently. Currently, the process is quite hard. For one, some Debian packages are so outdated that they're basically unusable by now, like i2pd. Sticking with that example: if one wants to install and use it, the security measures we have within Tails get in the way in very complex ways, and it can be very hard to "just install one application" for people who aren't familiar with the internal workings of Tails. i2pd creates a new user and executes in that user's context, but that user is unable to make any network connections, because the system firewall whitelists the users that may connect to the Tor SOCKS proxy port. And making all of these changes persistent is a challenge in itself.
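To illustrate the kind of whitelist that blocks such a package's dedicated user, here is a simplified sketch in iptables terms (this is not Tails' actual ferm ruleset; the user names and port are illustrative placeholders):

```sh
# Simplified sketch of a per-user SOCKS whitelist (NOT Tails' real ruleset).
# Only the whitelisted user may reach Tor's SOCKS port on loopback; a user
# created by an additional package (e.g. "i2pd") is not on the list, so its
# connections are rejected by the catch-all rule.
iptables -A OUTPUT -o lo -p tcp --dport 9050 \
         -m owner --uid-owner amnesia -j ACCEPT
iptables -A OUTPUT -j REJECT   # everything else, including i2pd's user
```

Persisting a change to such a ruleset is exactly the part that is hard today, since the firewall configuration lives outside the persistent storage.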
If we provided a GUI wrapper around systemd-nspawn, we could create something like Qubes or CoreOS, where everything users want to add goes into a separate userland with its own set of libraries (and vulnerabilities), without affecting the main/host system.
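For reference, creating and booting such a separate userland with the stock tools is already a two-command affair; the GUI would wrap something like this (a sketch; the target directory and suite are arbitrary examples):

```sh
# Bootstrap a minimal Debian userland into a machine directory (path is an example).
debootstrap stable /var/lib/machines/extra-sw
# Boot it as a container; systemd inside the guest runs as the container's PID 1.
systemd-nspawn --boot --directory=/var/lib/machines/extra-sw
```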
Also, we could slim the host system down to a bare minimum to reduce the attack surface, and separate all other components into individual containers to reduce it further and make chaining vulnerabilities across different libraries and applications much harder. The main change here should be to move the physical network interfaces out of the network namespace that all user processes live in, so that the only interfaces user processes see are the ones connecting them to the nspawn container where tor is running. That way it becomes impossible to bypass tor even if the firewall settings are misconfigured, which (as outlined above) can easily happen when one wants to add additional software right now. Secondly, this would also prevent current issues with the firewall ruleset, and information leaks of public IPs, when the client is in a network with public IPs (like university networks where each client gets a public IPv4 via DHCP; but that's a story for another day/ticket). Furthermore, I assume that fears of this kind of information leak are the reason why IPv6 isn't supported by Tails right now (even though Tor supports it just fine); having tor run in its own nspawn container would resolve this.
Network flow would become: firefox (X/user/normal namespace) => tor namespace => unsafe namespace (which controls the physical network interfaces)
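That chain can be sketched with plain iproute2 commands (illustrative only; the namespace and interface names are placeholders, and in practice systemd-nspawn would set up the equivalent links itself, e.g. via --network-veth):

```sh
# Create the tor and unsafe namespaces (names are placeholders).
ip netns add tor-ns
ip netns add unsafe-ns

# Link the default (user) namespace to the tor namespace with a veth pair.
ip link add veth-user type veth peer name veth-tor
ip link set veth-tor netns tor-ns

# Link the tor namespace to the unsafe namespace with a second veth pair.
ip link add veth-mid type veth peer name veth-unsafe
ip link set veth-mid netns tor-ns
ip link set veth-unsafe netns unsafe-ns

# Finally, move the physical NIC (name is an example) into the unsafe
# namespace, so user processes can no longer reach it directly.
ip link set eth0 netns unsafe-ns
```

Once the physical interface lives only in the unsafe namespace, a misconfigured firewall rule in the user namespace cannot leak traffic around tor: there is simply no route that does not pass through the tor namespace.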
This design change, together with an intuitive GUI, would also solve the use case of wanting to layer a VPN on top of, next to, or below tor, as well as the i2pd use case. In this UI, users would draw the network path through the namespaces/containers; the actual forwarding is the responsibility of the namespace in question. The web browsers would also be better isolated, and it would be very easy to understand the implications of what connects where. Ideally, we'd run each application in its own separate container, but that gets unnecessarily complicated and complex very quickly.
The workflow for a user who wants to add additional software could be as simple as clicking "create new nspawn container/namespace/zone" and getting a command line into it after the guest has "booted". In the backend, when the user clicks the new-nspawn button, a new userland would be initialized (we could use copy-on-write for this and start with the Dom0/host filesystem overlaid by a per-container persistent storage path), or an entirely custom, user-provided one (like Arch Linux or Fedora). This GUI could also list all systemd units from within the nspawn and allow toggling them to/from autostarting, by creating the corresponding unit files within the persistent storage and having systemd pick them up after unlocking and start them (and thereby the containers).
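For the autostart part, systemd already ships the machinery: containers under /var/lib/machines can be enabled via the systemd-nspawn@.service template, so the GUI would mostly need to write the corresponding symlinks/unit files into the persistent storage (a sketch; the machine and service names are examples):

```sh
# Autostart a container named "extra-sw" (name is an example) at boot:
# this symlinks systemd-nspawn@extra-sw.service into machines.target.wants.
machinectl enable extra-sw
machinectl start extra-sw          # boot it right away

# Individual services inside the guest are toggled the usual way:
systemctl -M extra-sw enable --now i2pd
```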