
Conversation

@gonfunko
Contributor

This PR adds infrastructure for writing screenreader tests using Guidepup. This is quite experimental and a bit temperamental, but having the ability to at least make some assertions about screenreader behavior seems valuable nonetheless.

Requirements

Currently this test setup only supports Chrome and VoiceOver on macOS. It should be relatively straightforward to extend it to support Chrome and NVDA on Windows. Other browsers on both platforms are probably possible as well, but more involved; Guidepup is not tied to Chrome in any way, but our existing test infrastructure, which this PR reuses, is.

You'll need to complete the Local Setup portion of this guide; Guidepup also offers a tool that may automate some of this, but I haven't tried it. The CI Setup portion of the guide is not necessary, nor is anything under the Additional System Permissions section. The first few times you run the test suite, you may be bombarded by various approval dialogs, all of which should be granted.

Running the tests

Once your system is set up, you should just be able to run npm run test:screenreader.

For best results:

  • Make sure that Chrome is not running before you run the tests. Chrome needs to be focused for VoiceOver to act on it, and if you already have it running, that instance may get focused rather than the one launched by WebDriver.
  • Set the VoiceOver speech rate as high as possible. Guidepup has to wait for VoiceOver to finish speaking, so the speech rate has a direct impact on how long the tests take to run.
  • Guidepup automates VoiceOver with AppleScript, but AppleScript support has some fairly major bugs in Tahoe. In particular, Guidepup uses AppleScript to check whether VoiceOver is running. This check seems redundant with another one it makes using ps; I suspect it exists to verify that AppleScript automation of VoiceOver has been enabled, but AFAICT it is unnecessary. Commenting out const appleScriptRunning... in node_modules/@guidepup/guidepup/lib/macOS/VoiceOver/isRunning.js makes running the tests much less flaky.

Writing tests

  • Generally, it seems to work best to navigate to the desired element via voiceOver.press('key goes here') rather than our navigation utilities.
  • Using assert.include() rather than a strict comparison of the screenreader output is helpful, since depending on context, VoiceOver sometimes faffs around and includes instructions or other unrelated output alongside the announcement you care about.
  • I disabled the Chai diff truncation, because typically the easiest approach is to write a test comparing the output to a dummy value, run it, and copy-paste the relevant bit from the failure diff, rather than trying to transcribe it by ear or copy it out of VoiceOver's text output window.
