Evaluate cost/benefit of our automated tests
… and consider not running some by default, or even removing some entirely.
When @anonym and I did this for our manual "test suite", we found that we were spending a lot of time testing things that did not matter much, were unlikely to break again, or for other reasons were not worth the effort. I suspect the same is true of our automated test suite.
Doing this together with @boyska could also be a nice way to give him an overview of what we're testing.
Criteria
- How much does this property matter? In other words, what is the impact if we break it?
- How likely is it that we break this property?
- Performance of the test, i.e. its run time.
- Maintenance cost of the test.
New Cucumber tags
Documented in !348 (merged):
- `@slow`
- `@not_release_blocker`
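For illustration, a feature or scenario would carry these tags like this (a sketch; the actual feature files and step names in our suite differ):

```gherkin
@slow @not_release_blocker
Feature: Example feature carrying the new tags

  Scenario: A long-running, non-blocking check
    Given a running Tails system
    Then some slow, non-release-critical property holds
```

Default runs could then skip them with a tag filter such as `cucumber --tags 'not @slow and not @not_release_blocker'` (standard Cucumber tag-expression syntax; our test suite's own runner may expose this differently).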
Work in Progress
- additional_software_packages.feature
- apt.feature
- build.feature
- dhcp.feature
- documentation.feature
- electrum.feature
- emergency_shutdown.feature
- encryption.feature
- erase_memory.feature
- evince.feature
- gnome.feature
- hardening.feature
- keys.feature
- localization.feature
- mac_spoofing.feature
- mat.feature
- networking.feature
- persistence.feature
- pidgin.feature
- po.feature
- root_access_control.feature
- sane_defaults.feature
- shutdown_applet.feature
- ssh.feature
- thunderbird.feature
- time_syncing.feature
- tor_bridges.feature
- tor_enforcement.feature
- torified_browsing.feature
- torified_git.feature
- torified_gnupg.feature
- torified_misc.feature
- tor_stream_isolation.feature
- totem.feature
- unsafe_browser.feature
- untrusted_partitions.feature
- usb_install.feature
- usb_upgrade.feature
- veracrypt.feature
- virtualization.feature
- whisperback.feature
Edited by intrigeri