Improve automated GUI testing robustness using "GUI aware" technologies
By “GUI aware” I mean something like assistive technologies, as used for instance by the Linux Desktop Testing Project (LDTP). That project is a nice example because it is packaged in Debian and has Ruby bindings. One issue is that our target system runs in a VM, while this tool may be designed to run on the same host as the applications being tested. Another similar tool, the Robot Framework, explicitly supports targets running on a different system, but it’s not in Debian and I am unsure whether it has Ruby bindings.
The goal is something more robust than our current Sikuli-based (hence image-driven) approach. OCR would likely not be an improvement, even if we found something better than Sikuli’s poor implementation.
Feature Branch: test/10721-a11y-technologies